Riverbed SteelScript for Python¶
Welcome, this is the documentation for Riverbed SteelScript for Python. At its core, SteelScript is a collection of Python modules that build upon REST APIs and other interfaces to interact with Riverbed appliances and software.
Quick Start¶
If you already have pip, just run the following (in a virtualenv):
$ pip install steelscript
$ steel install
Not sure about pip, but you know you have Python?
Download steel_bootstrap.py, then run it (in a virtualenv):
$ python steel_bootstrap.py install
SteelScript SteelHead¶
The SteelScript SteelHead package offers a set of interfaces to control and work with a SteelHead appliance. It comes pre-configured with a couple of example scripts showing how to interact with a SteelHead appliance using the interfaces provided in this package.
Once you have the base steelscript package installed, getting started is just one command away:
$ steel install --steelhead
For more details, see the complete documentation.
Documentation¶
Tutorials
Device modules
Library modules
License¶
This Riverbed SteelScript for Python documentation is provided “AS IS” and without any warranty or indemnification. Any sample code or scripts included in the documentation are licensed under the terms and conditions of the MIT License. See the License page for more information.
SteelScript Installation¶
SteelScript is provided as open source on GitHub (https://github.com/riverbed). Installation of SteelScript varies depending on the platform you are using.
Start with the specific instructions for Docker, Linux or Mac OS, or Windows for greater detail.
The quickest and easiest installation method is probably the Docker container.
Installing the SteelScript SteelHead package requires executing the command steel install --steelhead, but it might take a few more steps; see the SteelHead Installation Instructions for more details.
Python Compatibility Note¶
SteelScript requires Python 3, starting with version 2.0 libraries. The 1.8.x series of SteelScript packages are the last to support Python 2.x. The steelscript-netshark library was not upgraded beyond 1.8.x, as the NetShark product is now end-of-availability and end-of-support and existing users are recommended to transition to AppResponse.
GitHub master branches are now Python 3 only. Older versions compatible with Python 2 can still be downloaded from the Python Package Index (PyPI) by specifying a version lower than 2.0, like so: pip install "steelscript<2.0" (the quotes keep the shell from treating < as a redirect).
The SteelScript App Framework is not supported beyond Python 2.
steel Command¶
Once the base steelscript package is installed, the steel shell command is available. This command will normally be installed in your path so that you can just run it from any directory.
The functions provided by steel depend on what additional packages are installed, as each package may define additional sub-commands. The base steelscript package provides the following subcommands:
steel about - Show information about SteelScript packages installed
steel install - Package installation
steel uninstall - Package removal
steel rest - Interactive shell for issuing REST commands
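Since each installed package may contribute sub-commands, the dispatch can be pictured as a small registry. A minimal sketch of that plug-in pattern (hypothetical names; not the actual steelscript implementation):

```python
# Registry mapping sub-command name -> handler function
SUBCOMMANDS = {}

def subcommand(name):
    """Decorator that registers a handler under the given name."""
    def register(fn):
        SUBCOMMANDS[name] = fn
        return fn
    return register

@subcommand("about")
def about(args):
    # A real handler would inspect installed steelscript packages
    return "SteelScript packages installed: ..."

def main(argv):
    # Dispatch the first argument to the matching registered handler
    cmd = argv[0] if argv else "help"
    fn = SUBCOMMANDS.get(cmd)
    if fn is None:
        raise SystemExit("unknown subcommand: %s" % cmd)
    return fn(argv[1:])
```

An add-on package would simply register more handlers with the same decorator at import time.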
Most subcommands accept two options that control logging:
Logging Parameters:
--loglevel=LOGLEVEL log level: debug, warn, info, critical, error
--logfile=LOGFILE log file, use '-' for stdout
By default, the log level is set to info and logging is sent to the file ~/.steelscript/steel.log.
In addition, many commands support detailed REST logging parameters:
REST Logging:
--rest-debug=REST_DEBUG
Log REST info (1=hdrs, 2=body)
--rest-body-lines=REST_BODY_LINES
Number of lines of request/response body to log
steel install¶
The install subcommand is used to install and upgrade packages. There are a number of installation options available:
$ steel install -h
Usage: steel install [options]
Package installation
Options:
--version show program's version number and exit
-h, --help show this help message and exit
Package installation options:
-U, --upgrade Upgrade packages that are already installed
-d DIR, --dir=DIR Directory to use for installation
-g, --github Install packages from github
--develop Combine with --github to checkout packages
-p PACKAGES, --package=PACKAGES
Package to install (may specify more than once)
--appfwk Install all application framework packages
--pip-options=PIP_OPTIONS
Additional options to pass to pip
--steelhead Install steelhead packages
steel mkworkspace¶
The mkworkspace subcommand is used to create a workspace in which you can develop and run SteelScript scripts. It will pull all scripts from the /examples folder in your installed SteelScript packages. Once you create a workspace you will notice it contains the SteelScript scripts, a README file, and collect_examples.py. The collect_examples.py script can be used to collect any new examples from the SteelScript packages after the creation of the workspace.
There are a number of workspace options available:
$ steel mkworkspace -h
Usage: steel mkworkspace [options]
Create new workspace for running and creating Steelscript scripts
Options:
--version show program's version number and exit
-h, --help show this help message and exit
Make workspace options:
-d DIR, --dir=DIR Optional path for new workspace location
--git Initialize project as new git repo
steel uninstall¶
The uninstall subcommand is used to remove all steelscript packages. This can be helpful as a debugging tool, making it quick to uninstall SteelScript in order to reinstall or upgrade. This operation will only affect the steelscript packages themselves, not any of their installed dependencies (like requests or numpy or pandas). The available options are:
$ steel uninstall -h
Usage: steel uninstall [options]
Uninstall all SteelScript packages
Options:
--version show program's version number and exit
-h, --help show this help message and exit
Package uninstall options:
--non-interactive Remove packages without prompting for input
steel rest¶
The rest subcommand starts an interactive shell for issuing REST commands to a target server. This is best shown by example, connecting to the test server at httpbin.org. This freely available test server echoes back information sent to it and provides a nice way to demonstrate the features of the REST shell.
The REST shell supports history even across sessions, allowing you to scroll back through previous commands via the up/down arrows and edit them. Use Ctrl-R to search backward for a command.
Connecting¶
A connection must first be established using the connect command:
$ steel rest
REST Shell ('help' or 'quit' when done)
Current mode is 'json', use 'mode text' to switch to raw text
> connect http://httpbin.org/
http://httpbin.org/>
This creates a Python requests session to the target server. Basic authentication is supported by adding -u <username> -p <password>. The prompt changes to show the server currently used for REST requests. At any time a connection to a new server may be established using connect and the new server name.
Methods¶
The four basic HTTP methods are supported: GET, POST, PUT, DELETE. Each method takes the same parameters:
http://httpbin.org/> GET -h
Usage: GET <PATH> [options] ...
Perform an HTTP GET
Add URL parameters as <param>=<value>.
Add custom headers as <header>:<value>
Required Arguments:
PATH Full URL path
Options:
-h, --help show this help message and exit
Let’s try a simple GET of the path /get. The full URL will be the current server plus the absolute path http://httpbin.org/get:
http://httpbin.org/> GET /get
Issuing GET
HTTP Status 200: 406 bytes
{
"origin": "208.70.199.4",
"headers": {
"X-Request-Id": "860f1a1c-642e-4aef-a673-aad538976475",
"Accept-Encoding": "gzip, deflate",
"Host": "httpbin.org",
"Accept": "application/json",
"User-Agent": "python-requests/2.3.0 CPython/2.7.3 Darwin/13.1.0",
"Connection": "close",
"Content-Type": "application/json"
},
"args": {},
"url": "http://httpbin.org/get"
}
Once the REST request is issued, any response from the server is displayed. Note that the above response, including "origin" and "headers", is in the body of the response from httpbin.org; this server echoes back information about the request in its response to support testing. So "headers" shows the request headers that were automatically added to the outgoing request.
Notice that the content-type is application/json – this is the default encoding for outgoing requests. This applies primarily to PUT and POST which will prompt for a BODY:
http://httpbin.org/> POST /post
Provide body text, enter "." on a line by itself to finish
Request must be JSON, use double quotes for strings
{
"first": "Chris",
"last": "White"
}
.
After entering that last line with a period "." by itself, the REST shell issues the POST request and displays the response from the server:
Issuing POST
HTTP Status 200: 586 bytes
{
"files": {},
"origin": "208.70.199.4",
"form": {},
"url": "http://httpbin.org/post",
"args": {},
"headers": {
"Content-Length": "35",
"Accept-Encoding": "gzip, deflate",
"X-Request-Id": "36067711-b9a9-47b6-9f65-60202a1dffe7",
"Host": "httpbin.org",
"Accept": "application/json",
"User-Agent": "python-requests/2.3.0 CPython/2.7.3 Darwin/13.1.0",
"Connection": "close",
"Content-Type": "application/json"
},
"json": {
"last": "White",
"first": "Chris"
},
"data": "{\"last\": \"White\", \"first\": \"Chris\"}"
}
URL Parameters and Custom Headers¶
All methods support adding URL parameters and custom headers on the same line as the method:
http://httpbin.org/> GET /get x=1 y=2 X-Hdr:foo Y-Hdr:bar
The above will encode two URL parameters x and y and will add two custom HTTP headers X-Hdr and Y-Hdr.
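The token convention above can be sketched as a small tokenizer: anything containing "=" becomes a URL parameter and anything containing ":" becomes a header. This is a hypothetical illustration of the rule, not the shell's actual code:

```python
def split_args(tokens):
    """Split REST-shell style tokens into URL params and custom headers.

    <param>=<value> pairs become URL parameters; <header>:<value> pairs
    become HTTP headers (the '=' form is checked first).
    """
    params, headers = {}, {}
    for tok in tokens:
        if "=" in tok:
            key, value = tok.split("=", 1)
            params[key] = value
        elif ":" in tok:
            key, value = tok.split(":", 1)
            headers[key] = value
    return params, headers

# "GET /get x=1 y=2 X-Hdr:foo Y-Hdr:bar" would yield these dicts:
params, headers = split_args(["x=1", "y=2", "X-Hdr:foo", "Y-Hdr:bar"])
```

The resulting dicts map directly onto the params and headers keyword arguments of a requests session call.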
JSON vs Text modes¶
By default, the PUT/POST body is expected to be a JSON value. If the target server instead requires raw text, this can be changed with the mode command:
http://httpbin.org/> POST /post
Provide body text, enter "." on a line by itself to finish
Any value allowed
Here! Here!
.
Issuing POST
HTTP Status 200: 475 bytes
{
"files": {},
"origin": "208.70.199.4",
"form": {},
"url": "http://httpbin.org/post",
"args": {},
"headers": {
"Content-Length": "29",
"Accept-Encoding": "gzip, deflate",
"X-Request-Id": "6d2076cc-0213-4d74-84fd-24e6c8a37112",
"Host": "httpbin.org",
"Accept": "*/*",
"User-Agent": "python-requests/2.3.0 CPython/2.7.3 Darwin/13.1.0",
"Connection": "close"
},
"json": null,
"data": "Any value allowed\nHere! Here!"
}
REST Logging¶
Often it is useful to see the full details of each REST request and response. This is achieved using --rest-debug=<num> and --rest-body-lines=<num>. As a simple example, here’s the full tracing for the POST /post above with full logging enabled:
$ steel rest --logfile - --rest-debug=2 --rest-body-lines=10000
2014-06-12 22:41:40,511 [INFO ] (steelscript.commands.steel) ======================================================================
2014-06-12 22:41:40,511 [INFO ] (steelscript.commands.steel) ==== Started logging: /Users/cwhite/env/ss/bin/steel rest --logfile - --rest-debug=2 --rest-body-lines=10000
REST Shell ('help' or 'quit' when done)
Current mode is 'json', use 'mode text' to switch to raw text
> connect http://httpbin.org/
2014-06-12 22:41:44,171 [INFO ] (steelscript.commands.rest) Command: connect http://httpbin.org/
http://httpbin.org/> POST /post
2014-06-12 22:41:47,970 [INFO ] (steelscript.commands.rest) Command: POST /post
Provide body text, enter "." on a line by itself to finish
Request must be JSON, use double quotes for strings
{
"last": "White",
"first": "Chris"
}
.
Issuing POST
2014-06-12 22:41:56,370 [INFO ] (REST) POST http://httpbin.org/post
2014-06-12 22:41:56,371 [INFO ] (REST) Extra request headers:
2014-06-12 22:41:56,371 [INFO ] (REST) ... Content-Type: application/json
2014-06-12 22:41:56,371 [INFO ] (REST) ... Accept: application/json
2014-06-12 22:41:56,371 [INFO ] (REST) Request body:
2014-06-12 22:41:56,371 [INFO ] (REST) ... {
2014-06-12 22:41:56,371 [INFO ] (REST) ... "last": "White",
2014-06-12 22:41:56,372 [INFO ] (REST) ... "first": "Chris"
2014-06-12 22:41:56,372 [INFO ] (REST) ... }
2014-06-12 22:41:56,393 [INFO ] (requests.packages.urllib3.connectionpool) Starting new HTTP connection (1): httpbin.org
2014-06-12 22:41:56,608 [INFO ] (REST) Request headers:
2014-06-12 22:41:56,608 [INFO ] (REST) ... Content-Length: 35
2014-06-12 22:41:56,608 [INFO ] (REST) ... Content-Type: application/json
2014-06-12 22:41:56,608 [INFO ] (REST) ... Accept-Encoding: gzip, deflate
2014-06-12 22:41:56,608 [INFO ] (REST) ... Accept: application/json
2014-06-12 22:41:56,609 [INFO ] (REST) ... User-Agent: python-requests/2.3.0 CPython/2.7.3 Darwin/13.1.0
2014-06-12 22:41:56,609 [INFO ] (REST) Response Status 200, 586 bytes
2014-06-12 22:41:56,609 [INFO ] (REST) Response headers:
2014-06-12 22:41:56,609 [INFO ] (REST) ... content-length: 586
2014-06-12 22:41:56,609 [INFO ] (REST) ... server: gunicorn/18.0
2014-06-12 22:41:56,609 [INFO ] (REST) ... connection: keep-alive
2014-06-12 22:41:56,609 [INFO ] (REST) ... date: Fri, 13 Jun 2014 02:41:56 GMT
2014-06-12 22:41:56,609 [INFO ] (REST) ... access-control-allow-origin: *
2014-06-12 22:41:56,609 [INFO ] (REST) ... content-type: application/json
2014-06-12 22:41:56,623 [INFO ] (REST) Response body:
2014-06-12 22:41:56,623 [INFO ] (REST) ... {
2014-06-12 22:41:56,623 [INFO ] (REST) ... "files": {},
2014-06-12 22:41:56,623 [INFO ] (REST) ... "origin": "72.93.33.239",
2014-06-12 22:41:56,623 [INFO ] (REST) ... "form": {},
2014-06-12 22:41:56,623 [INFO ] (REST) ... "url": "http://httpbin.org/post",
2014-06-12 22:41:56,623 [INFO ] (REST) ... "args": {},
2014-06-12 22:41:56,623 [INFO ] (REST) ... "headers": {
2014-06-12 22:41:56,623 [INFO ] (REST) ... "Content-Length": "35",
2014-06-12 22:41:56,623 [INFO ] (REST) ... "Accept-Encoding": "gzip, deflate",
2014-06-12 22:41:56,624 [INFO ] (REST) ... "X-Request-Id": "aad9bb28-eaa1-4302-a248-a24bb4ea671f",
2014-06-12 22:41:56,624 [INFO ] (REST) ... "Host": "httpbin.org",
2014-06-12 22:41:56,624 [INFO ] (REST) ... "Accept": "application/json",
2014-06-12 22:41:56,624 [INFO ] (REST) ... "User-Agent": "python-requests/2.3.0 CPython/2.7.3 Darwin/13.1.0",
2014-06-12 22:41:56,624 [INFO ] (REST) ... "Connection": "close",
2014-06-12 22:41:56,624 [INFO ] (REST) ... "Content-Type": "application/json"
2014-06-12 22:41:56,624 [INFO ] (REST) ... },
2014-06-12 22:41:56,624 [INFO ] (REST) ... "json": {
2014-06-12 22:41:56,624 [INFO ] (REST) ... "last": "White",
2014-06-12 22:41:56,624 [INFO ] (REST) ... "first": "Chris"
2014-06-12 22:41:56,624 [INFO ] (REST) ... },
2014-06-12 22:41:56,624 [INFO ] (REST) ... "data": "{\"last\": \"White\", \"first\": \"Chris\"}"
2014-06-12 22:41:56,624 [INFO ] (REST) ... }
HTTP Status 200: 586 bytes
{
"files": {},
"origin": "72.93.33.239",
"form": {},
"url": "http://httpbin.org/post",
"args": {},
"headers": {
"Content-Length": "35",
"Accept-Encoding": "gzip, deflate",
"X-Request-Id": "aad9bb28-eaa1-4302-a248-a24bb4ea671f",
"Host": "httpbin.org",
"Accept": "application/json",
"User-Agent": "python-requests/2.3.0 CPython/2.7.3 Darwin/13.1.0",
"Connection": "close",
"Content-Type": "application/json"
},
"json": {
"last": "White",
"first": "Chris"
},
"data": "{\"last\": \"White\", \"first\": \"Chris\"}"
}
http://httpbin.org/>
SteelScript Common¶
This module provides many utility functions and other classes used by the various other SteelScript components.
Documentation available in this module:
Generic helper classes and functions:
Base classes and functionality specific to SteelScript:
steelscript.common.timeutils¶
This module contains a number of utilities for working with dates and times, in conjunction with the python datetime module.
Timezone Handling¶
steelscript.common.timeutils.ensure_timezone(dt)¶
Return a datetime object that corresponds to dt but that always has timezone info. If dt already has timezone info, then it is simply returned. If dt does not have timezone info, then the local time zone is assumed.
steelscript.common.timeutils.force_to_utc(dt)¶
Return a datetime object that corresponds to dt but in UTC rather than local time. If dt includes timezone info, then this routine simply converts from the given timezone to UTC. If dt does not include timezone info, then it is assumed to be in local time, which is then converted to UTC.
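The behavior of these two helpers can be sketched with the standard library alone (an illustration of the described semantics, not the steelscript implementation):

```python
from datetime import datetime, timezone

def ensure_timezone(dt):
    # A naive datetime is assumed to be local time; attach the local zone
    if dt.tzinfo is None:
        return dt.astimezone()
    return dt

def force_to_utc(dt):
    # Attach a timezone if needed, then convert whatever zone it has to UTC
    return ensure_timezone(dt).astimezone(timezone.utc)
```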
Conversions¶
Devices often represent time as seconds (or microseconds or
nanoseconds) since the Unix epoch (January 1, 1970). The following
functions are useful for converting to and from native Python
datetime.datetime
objects:
steelscript.common.timeutils.datetime_to_seconds(dt)¶
Return the number of seconds since the Unix epoch for the datetime object dt.
steelscript.common.timeutils.datetime_to_microseconds(dt)¶
Return the number of microseconds since the Unix epoch for the datetime object dt.
steelscript.common.timeutils.datetime_to_nanoseconds(dt)¶
Return the number of nanoseconds since the Unix epoch for the datetime object dt.
steelscript.common.timeutils.usec_string_to_datetime(s)¶
Convert the string s, which represents a time in microseconds since the Unix epoch, to a datetime object.
steelscript.common.timeutils.nsec_to_datetime(ns)¶
Convert the value ns, which represents a time in nanoseconds since the Unix epoch (either as an integer or a string), to a datetime object.
steelscript.common.timeutils.usec_string_to_timedelta(s)¶
Convert the string s, which represents a number of microseconds, to a timedelta object.
steelscript.common.timeutils.timedelta_total_seconds(td)¶
Handle backwards compatibility for timedelta.total_seconds.
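These epoch conversions are simple arithmetic on datetime.timestamp(). A stdlib-only sketch of a few of them (illustrative, not the steelscript implementation):

```python
from datetime import datetime, timezone

def datetime_to_seconds(dt):
    # Whole seconds since the Unix epoch
    return int(dt.timestamp())

def datetime_to_microseconds(dt):
    # Microseconds since the Unix epoch
    return int(dt.timestamp() * 1e6)

def usec_string_to_datetime(s):
    # Microseconds-since-epoch string -> timezone-aware UTC datetime
    return datetime.fromtimestamp(int(s) / 1e6, tz=timezone.utc)
```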
Parsing dates and times¶
class steelscript.common.timeutils.TimeParser¶
Instances of this class parse strings representing dates and/or times into python datetime.datetime objects.
This class is capable of parsing a variety of different formats. On the first call, the method parse() may take some time, as it tries a series of pre-defined formats one after another. After successfully parsing a string, the parser object remembers the format that was used, so subsequent calls with identically formatted strings are as efficient as the underlying method datetime.strptime.
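The remember-the-last-format strategy described above can be sketched as follows (the format list here is hypothetical and much shorter than the real one; this is not the actual class):

```python
from datetime import datetime

class TimeParser:
    # Hypothetical candidate formats; the real class tries many more
    FORMATS = ["%Y-%m-%d %H:%M:%S", "%m/%d/%Y %H:%M", "%H:%M:%S"]

    def __init__(self):
        self._fmt = None  # format remembered from the last successful parse

    def parse(self, s):
        # Try the remembered format first, then fall back to the full list
        candidates = [self._fmt] + self.FORMATS if self._fmt else self.FORMATS
        for fmt in candidates:
            try:
                dt = datetime.strptime(s, fmt)
            except ValueError:
                continue
            self._fmt = fmt
            return dt
        raise ValueError("could not parse %r" % s)
```

After the first successful parse, identically formatted strings cost a single strptime call.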
Parsing time ranges¶
steelscript.common.timeutils.parse_timedelta(s)¶
Parse the string s representing some duration of time (e.g., “3 seconds” or “1 week”) and return a datetime.timedelta object representing that length of time. If the string cannot be parsed, raises ValueError.
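A minimal sketch of this kind of duration parsing, supporting only a few units (illustrative, not the steelscript implementation):

```python
import re
from datetime import timedelta

# Seconds per supported unit (a deliberately small, hypothetical table)
_UNITS = {"second": 1, "minute": 60, "hour": 3600,
          "day": 86400, "week": 604800}

def parse_timedelta(s):
    # Accepts forms like "3 seconds" or "1 week"
    m = re.match(r"\s*(\d+)\s*(second|minute|hour|day|week)s?\s*$", s)
    if m is None:
        raise ValueError("cannot parse duration: %r" % s)
    return timedelta(seconds=int(m.group(1)) * _UNITS[m.group(2)])
```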
steelscript.common.timeutils.parse_range(s, begin_monday=False)¶
Parse the string s representing a range of times (e.g., “12:00 PM to 1:00 PM” or “last 2 weeks”). Upon success returns a pair of datetime.datetime objects representing the beginning and end of the time range. If the string cannot be parsed, raises ValueError.
steelscript.common.service¶
This module defines the Service class and associated authentication classes. The Service class is not instantiated directly, but is instead subclassed to implement handlers for particular REST namespaces.
For example, the NetShark is based on Service using the “netshark” namespace, and will provide the necessary methods to interface with the REST resources available within that namespace.
If a device or appliance implements multiple namespaces, each namespace will be exposed by a separate child class. The SteelCentral NetExpress product implements both the “netprofiler” and “netshark” namespaces. These will be exposed via NetShark and NetProfiler classes respectively, both based on the Service class. A script that interacts with both namespaces must instantiate two separate objects.
Service Objects¶
class steelscript.common.service.Service(service, host=None, port=None, auth=None, verify_ssl=False, versions=None)¶
This class is the main interface to interact with a device via REST and provides the following functionality:
Connection management
Resource requests and responses
Authentication
“common” resources
A connection is established as soon as an instance of this object is created. Requests can be made via the Service.conn property.
__init__(service, host=None, port=None, auth=None, verify_ssl=False, versions=None)¶
Establish a connection to the named host.
host is the name or IP address of the device to connect to.
port is the TCP port to use for the connection. This may be either a single port or a list of ports. If left unset, the port will automatically be determined.
auth defines the authentication method and credentials to use to access the device. See UserAuth and OAuth. If set to None, the connection is not authenticated.
verify_ssl when set to True will only allow verified SSL certificates on any connections; False will not verify certs (useful for self-signed certs on many test systems).
versions is the API versions that the caller can use. If unspecified, this will use the latest version supported by both this implementation and the service requested. This does not apply to the “common” resource requests.
authenticate(auth)¶
Authenticate with the device using the defined authentication method. This sets up the appropriate authentication headers to access restricted resources. auth must be an instance of either UserAuth or OAuth.
check_api_versions(api_versions)¶
Check that the server supports the given API versions.
logout()¶
End the authenticated session with the device.
ping()¶
Ping the service. On failure, this raises an exception.
reauthenticate()¶
Retry the authentication method.
Authentication¶
Most REST resource calls require authentication. Devices will support one or more authentication methods. The following methods may be supported:
Auth.OAUTH - OAuth 2.0 based authentication using an access code. The access code is used to retrieve an access token which is used in subsequent REST calls.
Auth.COOKIE - session based authentication via HTTP Cookies. The initial authentication uses username and password. On success, an HTTP Cookie is set and used for subsequent REST calls.
Auth.BASIC - simple username/password based HTTP Basic authentication.
When a Service object is created, the user may either pass an authentication object to the constructor, or pass it later to the Service.authenticate() method.
UserAuth Objects¶
class steelscript.common.service.UserAuth(username, password, method=None)¶
This class is used for both Basic and Cookie based authentication, which rely on username and password.
__init__(username, password, method=None)¶
Define an authentication method using username and password. By default this will be used for both Basic as well as Cookie based authentication methods (whichever is supported by the target). Authentication can be restricted by setting the method to either Auth.BASIC or Auth.COOKIE.
OAuth Objects¶
steelscript.common.connection¶
Connection Objects¶
class steelscript.common.connection.Connection(hostname, auth=None, port=None, verify=True, reauthenticate_handler=None)¶
Handle authentication and communication to remote machines.
__init__(hostname, auth=None, port=None, verify=True, reauthenticate_handler=None)¶
Initialize a new connection and set up authentication.
hostname - include protocol, e.g. “https://host.com”
auth - authentication object, see below
port - optional port to use for connection
verify - require SSL certificate validation
Authentication: For simple basic auth, passing a tuple of (user, pass) is sufficient as a shortcut to an instance of HTTPBasicAuth. This auth method will trigger a check to ensure the protocol is using SSL to connect (though cert verification may still be turned off to avoid errors with self-signed certs).
OAuth2 will require the requests-oauthlib package and an instance of the OAuth2Session object. netrc config files will be checked if auth is left as None. If no authentication is provided for the hostname in the netrc file, or no file exists, an error will be raised when trying to connect.
class JsonEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)¶
Handle more object types if first encoding doesn’t work.
default(o)¶
Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError). For example, to support arbitrary iterators, you could implement default like this:
def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
download(url, path=None, overwrite=False, method='GET', extra_headers=None, params=None)¶
Download a file from a remote URI and save it to a local path.
url is the url of the file to download.
path is an optional path on the local filesystem to save the downloaded file. It can be:
a complete path
a directory
In the first case the file will have the specified name and extension. In the second case the filename will be retrieved by the ‘Content-Disposition’ HTTP header. If a path cannot be determined, a ValueError is raised.
overwrite if True will save the downloaded file to path even if the file already exists.
method is the HTTP method used for the request.
extra_headers is a dictionary of headers to use for the request.
params is a dictionary of parameters for the request.
get_url(path)¶
Returns a fully qualified URL given a path.
json_request(method, path, body=None, params=None, extra_headers=None, raw_response=False)¶
Send a JSON request and receive a JSON response.
upload(path, data, method='POST', params=None, extra_headers=None)¶
Upload raw data to the given URL path with the given content type.
data may be either a string or a python file object.
extra_headers is a dictionary of additional HTTP headers to send with the request (e.g. Content-Type, Content-Disposition).
params is a dictionary of URL parameters to attach to the request. The keys and values will be urlencoded.
method defaults to “POST”, but can be overridden if the API requires another method such as “PUT” to be used instead.
Returns location information if the resource has been created, otherwise the response body (if any).
upload_file(path, files, body=None, params=None, extra_headers=None, file_headers=None, field_name='file', raw_response=False)¶
Executes a POST to upload a file or files.
- Parameters
path – The full or relative URL of the file upload API
files – Can be a string that is the full path to a file to be uploaded OR it can be a tuple/list of strings that are each the full path to a file to be uploaded.
body – Optional body. If present must be a dictionary.
params – optional URL params
extra_headers – Optional headers
file_headers – Optional headers to include with the multipart file data. Default is {‘Expires’: ‘0’}. Pass in an empty dict object if you would not like to include any file_headers in the multipart data.
field_name – The name of the form field on the destination that will receive the posted multipart data. Default is ‘file’
raw_response – False (default) results in the function returning only the decoded JSON response present in the response body. If set to True then the function will return a tuple of the decoded JSON body and the full response object. Set to True if you want to inspect the result code or response headers.
- Returns
See ‘raw_response’ for details on the returned data.
urlencoded_request(method, path, body=None, params=None, extra_headers=None, raw_response=False)¶
Send a request with url-encoded parameters in the body.
xml_request(method, path, body=None, params=None, extra_headers=None, raw_response=False)¶
Send an XML request to the host. The Content-Type and Accept headers are set to text/xml. In addition, any response will be XML-decoded as an xml.etree.ElementTree. The body is assumed to be an XML encoded text string and is inserted into the HTTP payload as-is.
SteelScript NetProfiler¶
All interaction with a NetProfiler requires an instance of NetProfiler. This class establishes a connection to the NetProfiler. There are dedicated classes for each different type of report.
Documentation available in this module:
SteelScript NetProfiler Tutorial¶
This tutorial will walk through the main components of the SteelScript interfaces for Riverbed SteelCentral NetProfiler. It is assumed that you have a basic understanding of the Python programming language.
The tutorial has been organized so you can follow it sequentially. Throughout the examples, you will be expected to fill in details specific to your environment. These will be called out using a dollar sign $<name> – for example $host indicates you should fill in the host name or IP address of a NetProfiler appliance.
Whenever you see >>>, this indicates an interactive session using the Python shell. The command that you are expected to type follows the >>>. The result of the command follows. Any lines with a # are just comments to describe what is happening. In many cases the exact output will depend on your environment, so it may not match precisely what you see in this tutorial.
Background¶
NetProfiler provides centralized reporting and analysis of the data collected by other SteelCentral appliances (i.e., Flow Gateway, and NetShark) and SteelHead products on a single user interface. SteelScript for NetProfiler makes this wealth of data easily accessible via Python.
NetProfiler Objects¶
Interacting with a NetProfiler leverages two key classes:
NetProfiler - provides the primary interface to the appliance, handling initialization, setup, and communication via REST API calls.
Report - talks through the NetProfiler to create new reports and pull data from existing reports.
In most cases you will not use Report directly – your scripts will use a more helpful object tailored to the desired report, such as a TrafficSummaryReport or a TrafficOverallTimeSeriesReport. We’ll cover those shortly.
Outside of handling all the communication back and forth, NetProfiler also handles all of the different report columns that could be desired. It provides a helpful interface to offer up available columns by report type, and ensures that any chosen columns are in fact appropriate.
With that brief overview, let’s get started.
Startup¶
As with any Python code, the first step is to import the module(s) we intend to use. The SteelScript code for working with NetProfiler appliances resides in a module called steelscript.netprofiler.core. The main class in this module is NetProfiler. This object represents a connection to a NetProfiler appliance.
To begin, start python from the shell or command line:
$ python
Python 3.8.3
Type "help", "copyright", "credits" or "license" for more information.
>>>
Once in the python shell, let’s create a NetProfiler object:
>>> from steelscript.netprofiler.core import NetProfiler
>>> from steelscript.common.service import UserAuth
>>> p = NetProfiler('$hostname', auth=UserAuth('$username', '$password'))
Replace the first argument $hostname with the hostname or IP address of the NetProfiler appliance. The second argument is an auth parameter and identifies the authentication method to use – in this case, simple username/password is used. OAuth 2.0 is supported as well, but we will focus on basic authentication for this tutorial.
As soon as the NetProfiler object is created, a connection is established to the appliance, the authentication credentials are validated, and the hierarchy of available columns is loaded. If the username and password are not correct, you will immediately see an exception. Also, if this is the first time initializing a NetProfiler object, there will be a short delay while all of the columns are fetched from the appliance and cached locally.
The p object is the basis for all communication with the NetProfiler appliance. We can get some basic version information by simply looking at the version attribute:
>>> print(p.version)
'10.1 (release 20130204_1200)'
Before moving on, exit the python interactive shell:
>>> [Ctrl-D]
$
Generating Reports¶
Reports are the mechanism for extracting the myriad data NetProfiler collects into any format desired. We will create a short script
that provides a command-line interface to generate reports on the fly.
Create a new file in a working directory of your choice, call it myreport.py
,
and insert the following lines:
import pprint

from steelscript.netprofiler.core import NetProfiler
from steelscript.common.service import UserAuth
from steelscript.netprofiler.core.filters import TimeFilter
from steelscript.netprofiler.core.report import TrafficSummaryReport

# connection information
username = '$username'
password = '$password'
auth = UserAuth(username, password)
host = '$host'

# create a new NetProfiler instance
p = NetProfiler(host, auth=auth)

# setup basic info for our report
columns = [p.columns.key.host_ip,
           p.columns.value.avg_bytes,
           p.columns.value.network_rtt]
sort_column = p.columns.value.avg_bytes
timefilter = TimeFilter.parse_range("last 5 m")

# initialize a new report, and run it
report = TrafficSummaryReport(p)
report.run('hos', columns, timefilter=timefilter, sort_col=sort_column)

# grab the data, and legend (it should be what we passed in for most cases)
data = report.get_data()
legend = report.get_legend()

# once we have what we need, delete the report from the NetProfiler
report.delete()

# print out the top ten hosts!
pprint.pprint(data[:10])
Be sure to fill in appropriate values for $host
, $username
and $password
.
Run this script as follows and you should see something like the following:
$ python myreport.py
[['10.100.6.12', 1733552.81667, ''],
['10.99.18.154', 1027017.35, 0.124],
['10.100.5.12', 814550.3, ''],
['10.100.5.13', 707320.527778, ''],
['10.100.6.14', 691441.777778, ''],
['10.100.6.10', 525593.25, ''],
['10.100.120.108', 455330.638889, ''],
['10.100.5.11', 443483.577778, ''],
['10.100.6.11', 385050.85, ''],
['10.100.201.33', 371349.105556, 0.046]]
We’ve created our first report! Let’s take a closer look at what we just did.
import pprint
from steelscript.netprofiler.core import NetProfiler
from steelscript.common.service import UserAuth
from steelscript.netprofiler.core.filters import TimeFilter
from steelscript.netprofiler.core.report import TrafficSummaryReport
These first few lines import our SteelScript modules and prepare them for use in the rest of the script. The Python style guide (PEP 8) recommends that standard-library modules like pprint be imported first, with third-party modules (like SteelScript) following.
# connection information
username = '$username'
password = '$password'
auth = UserAuth(username, password)
host = '$host'
These are our login credentials. We have hard-coded them into the script for this example, but we will show shortly how to supply them on the command line.
# create a new NetProfiler instance
p = NetProfiler(host, auth=auth)
# setup basic info for our report
columns = [p.columns.key.host_ip,
           p.columns.value.avg_bytes,
           p.columns.value.network_rtt]
sort_column = p.columns.value.avg_bytes
timefilter = TimeFilter.parse_range("last 5 m")

# initialize a new report, and run it
report = TrafficSummaryReport(p)
report.run('hos', columns, timefilter=timefilter, sort_col=sort_column)
Now things get interesting. After initializing a new NetProfiler instance, we define some of the settings we want to use in our report:
columns is a list of the column types we want to use in our report
sort_column indicates which column NetProfiler should use to sort on
timefilter provides a time range for the period the report should be limited to
Next, a new report instance is created, and the variables we just defined are used to generate a report.
# grab the data, and legend (it should be what we passed in for most cases)
data = report.get_data()
legend = report.get_legend()
# once we have what we need, delete the report from the NetProfiler
report.delete()
# print out the top ten hosts!
pprint.pprint(data[:10])
Here, the comments walk through what is happening fairly well. Deleting reports helps keep things tidy, but leaving them around does no harm: the appliance will clean up any leftover reports after 24 hours.
Finally, since we included a column to sort on in our report request, we can just limit the output to the first ten items to get the top ten.
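Since we asked NetProfiler to sort by avg_bytes, the rows come back already ordered, and the slice data[:10] is all that is needed. The same pattern can be sketched in plain Python on made-up rows (no appliance required); the rows here are hypothetical, shaped like the report.get_data() output above:

```python
# Hypothetical rows shaped like report.get_data() output:
# [host_ip, avg_bytes, network_rtt]
rows = [
    ['10.100.5.12', 814550.3, ''],
    ['10.100.6.12', 1733552.8, ''],
    ['10.99.18.154', 1027017.3, 0.124],
]

# The appliance sorts for us when sort_col is given, but the same
# ordering can be recreated locally on the avg_bytes column (index 1):
rows_sorted = sorted(rows, key=lambda r: r[1], reverse=True)

# Slicing keeps only the first N entries, giving the top N hosts.
top_two = rows_sorted[:2]
print(top_two)
```

The same slice works whether the server or the client did the sorting, which is why the tutorial script only needs data[:10].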
Reporting Columns¶
We chose only a small subset of the available columns for our example script.
We could include any columns applicable for this report type. To help identify
which columns are available, we could start up a python console and try some of
the commands discussed in the Profile Columns
section, or we could use the helper command steel netprofiler columns
.
The steel command should have been installed in one of your local bin directories (Scripts on Windows). Try the following command to see if it's on your path:
$ which steel
If that doesn't return a path, you will need to add the directory where steel was installed to your shell's search path.
Now that you are set up, let's find some columns.
In our example, we glossed over the specific realm, centricity, and groupby that was selected. For a TrafficSummaryReport, those three items could be as follows:
Parameter | Possible values |
---|---|
realm | traffic_summary |
centricity | hos, int |
groupby | any type except thu |
Enter the following:
$ steel netprofiler columns -h
Usage: steel netprofiler columns HOST [options] ...
List columns available for NetProfiler reports
Required Arguments:
HOST NetProfiler hostname or IP address
Options:
-h, --help show this help message and exit
[...text continues...]
And you will see all of the available options to the script. Among them are options for host, username, and password: where we had those hard-coded in our example, now we pass them as options to the script.
$ steel netprofiler columns $hostname -u $username -p $password
If it connects successfully, this will execute and print nothing. Now, let's add our triplet information:
$ steel netprofiler columns $hostname -u $username -p $password -r traffic_summary
-c hos -g host --list-columns
Key Columns Label ID
-------------------------------------------------
group_name Group 23
[...text continues...]
Value Columns Label ID
----------------------------------------------------------------------------
avg_bytes Avg Bytes/s 33
avg_bytes_app Avg App Bytes/s 504
[...text continues...]
The available key and value columns will be presented. If additional columns were desired for your report, select from this list.
We have chosen host as our groupby option, but to get a full list of what is available, use the --list-groupbys option:
$ steel netprofiler columns $hostname -u $username -p $password --list-groupbys
GroupBy Id
------------------------------------
host_pair hop
ip_mac_pair ipp
port_group pgr
[...text continues...]
Note that the correct value to pass to the steel netprofiler columns script is the groupby name, not the Id.
Once you have found the set of columns you are interested in, you will now have a means of including them in your report request. The following syntax would be one way to reference them:
columns = [p.columns.key.host_ip,
           p.columns.value.avg_bytes,
           p.columns.value.network_rtt]
Assuming p
is a NetProfiler instance, this would be one format to create
a list of key and value columns. Keys are named p.columns.key.<colname>
and
values are named p.columns.value.<colname>
.
Additional discussion on columns can be found here.
Extending the Example¶
As a last item to help get started with your own scripts, we will extend our example with two helpful features: command-line options and table outputs.
Rather than show how to update your existing example script, we will post the new script below, then walk through key differences that add the features we are looking for.
#!/usr/bin/env python

import optparse

from steelscript.netprofiler.core.filters import TimeFilter
from steelscript.netprofiler.core.report import TrafficSummaryReport
from steelscript.netprofiler.core.app import NetProfilerApp
from steelscript.common.datautils import Formatter


class ExampleApp(NetProfilerApp):

    def add_options(self, parser):
        super(ExampleApp, self).add_options(parser)
        group = optparse.OptionGroup(parser, "Example Options")
        group.add_option('-r', '--timerange', dest='timerange', default=None,
                         help='Time range to limit report to, e.g. "last 5 m"')
        parser.add_option_group(group)

    def main(self):
        p = self.netprofiler
        report = TrafficSummaryReport(p)
        columns = [p.columns.key.host_ip,
                   p.columns.value.avg_bytes,
                   p.columns.value.network_rtt]
        sort_column = p.columns.value.avg_bytes
        timefilter = TimeFilter.parse_range(self.options.timerange)
        report.run('hos', columns, timefilter=timefilter, sort_col=sort_column)
        data = report.get_data()
        legend = report.get_legend()
        report.delete()
        header = [c.key for c in columns]
        Formatter.print_table(data[:10], header)


ExampleApp().run()
Copy that code into a new file and run it with a timerange option, and you will find that the same base set of options used for steel netprofiler columns is now included in this script. In particular, hostname, username, and password are now all items passed to the script.
For example:
> python myreport2.py $hostname -u $username -p $password -r "last 10 min"
host_ip avg_bytes network_rtt
--------------------------------------------------
10.100.6.12 683349.295833
10.100.5.13 653938.525
10.100.120.108 572001.791667
10.100.5.11 438921.75
10.100.201.30 405558.216667 0.051
10.100.5.12 398773.9875
10.100.201.20 359039.758333 0.153
10.100.201.21 306396.929167 0.154
10.100.202.2 301756.991667 0.011
10.100.201.32 293926.695833 0.064
And we get a nicely formatted table, too!
First we needed to import some new items:
#!/usr/bin/env python
from steelscript.netprofiler.core.filters import TimeFilter
from steelscript.netprofiler.core.report import TrafficSummaryReport
from steelscript.netprofiler.core.app import NetProfilerApp
from steelscript.common.datautils import Formatter
import optparse
The bit at the top is called a shebang: it tells the system to execute this script using the program named after the '#!'. We are also importing the NetProfilerApp and Formatter classes to help with our new updates, and the built-in library optparse to parse command-line options.
class ExampleApp(NetProfilerApp):

    def add_options(self, parser):
        super(ExampleApp, self).add_options(parser)
        group = optparse.OptionGroup(parser, "Example Options")
        group.add_option('-r', '--timerange', dest='timerange', default=None,
                         help='Time range to limit report to, e.g. "last 5 m"')
        parser.add_option_group(group)
This section begins the definition of a new class that inherits from NetProfilerApp. This is some of the magic of object-oriented programming: a lot of functionality, including the basics of authentication and setting up a NetProfiler instance, is defined as part of NetProfilerApp, and we get all of that for free just by inheriting from it. We then extend its functionality by defining the method add_options, which adds a new option for passing in a time range on the command line.
    def main(self):
        p = self.netprofiler
        report = TrafficSummaryReport(p)
        columns = [p.columns.key.host_ip,
                   p.columns.value.avg_bytes,
                   p.columns.value.network_rtt]
        sort_column = p.columns.value.avg_bytes
        timefilter = TimeFilter.parse_range(self.options.timerange)
        report.run('hos', columns, timefilter=timefilter, sort_col=sort_column)
        data = report.get_data()
        legend = report.get_legend()
        report.delete()
        header = [c.key for c in columns]
        Formatter.print_table(data[:10], header)


ExampleApp().run()
This is the main part of the script, and remains mostly unchanged from our previous version. Rather than create the NetProfiler instance directly, that is now being done for us as part of NetProfilerApp. We just need to reference it as shown.
The timefilter option is now being pulled from the command-line,
self.options.timerange
, so we have one additional item that can be varied
from run to run.
Next, we use a little magic to pull out the key information from each of the column objects. The expression in the brackets in the header assignment is called a list comprehension.
Think of it like a condensed for-loop. Once we have a header, we pass
that along with our data to the Formatter.print_table
function,
and that will print out our data nicely formatted into columns.
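If Formatter.print_table is unfamiliar, the two steps can be sketched in plain Python. This is an illustration only, not the Formatter implementation; FakeColumn is a made-up stand-in for the real column objects, which expose a key attribute:

```python
# Simplified stand-ins for the column objects; only .key matters here.
class FakeColumn:
    def __init__(self, key):
        self.key = key

columns = [FakeColumn('host_ip'), FakeColumn('avg_bytes')]
data = [['10.100.6.12', 683349.2],
        ['10.100.5.13', 653938.5]]

# The list comprehension collects c.key for each column object.
header = [c.key for c in columns]

# Pad every cell to a fixed width so the columns line up,
# roughly what Formatter.print_table does for us.
width = 16
for row in [header] + data:
    print(''.join(str(cell).ljust(width) for cell in row))
```

The real Formatter also sizes columns to fit the data and draws a separator line, but the header-building step is exactly this comprehension.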
The last line calls the main run-loop as defined in the NetProfilerApp class, and the rest should function as before.
Profiler Columns and Groupbys¶
One of the key pieces of information NetProfiler keeps track of is the set of available Column types, and the contexts in which each is appropriate. For instance, when running a Traffic Summary report, time is not a valid column of data, since this report type organizes its information in other ways.
Column types fall into two categories: keys and values. Keys are column types that represent the primary organization/grouping of the data, and values are all of the different calculations that can be made.
The contexts for columns that are available are defined by three values: realm, centricity, and groupby. A breakdown of how these three inter-relate is shown in the following table:
realm | centricity | groupby |
---|---|---|
traffic_summary | hos,int | all (except thu) |
traffic_overall_time_series | hos,int | tim |
traffic_flow_list | hos | hos |
identity_list | hos | thu |
As SteelScript develops further, this table and the available permutations will expand.
Let’s take a look at how these work a little more closely. Startup a new instance of your Python interpreter, similar to before:
$ python
Python 2.7.3 (default, Apr 19 2012, 00:55:09)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from steelscript.netprofiler.core import NetProfiler
>>> from steelscript.common.service import UserAuth
>>> p = NetProfiler('$hostname', auth=UserAuth('$username', '$password'))
Now, let's investigate which columns are available for a specific type of report:
>>> realms = ['traffic_summary']
>>> centricities = ['hos']
>>> groupbys = ['hos']
>>> columns = p.search_columns(realms=realms, centricities=centricities, groupbys=groupbys)
Here we have set up three local variables and passed them as arguments to the search_columns() method on our netprofiler object. Note the brackets around each of the definitions: they mean we created a list object for each of the three variables. In this case, each list contains only a single string.
Let’s take a look at what that method returned:
>>> len(columns)
146
So, a total of 146 columns can be chosen for a report with those three filters! Note that your specific number may vary, depending on the version of NetProfiler you are running.
>>> columns[:2]
[<Column(cid=31, key=total_pkts, iskey=False label=Total Packets)>,
<Column(cid=427, key=in_avg_conns_rsts, iskey=False label=Avg Resets/s (Rx))>]
This command uses slicing
to show only the first two elements of the list. Notice these are
objects themselves, with quite a bit of information associated with
each one. These objects are used extensively within netprofiler
, but
the main thing to keep in mind is that you can refer to columns by
their text value (the ‘key’ attribute), by index value (the ‘cid’
attribute in the example above), or by the actual object itself.
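To make those three ways of referring to a column concrete, here is a small self-contained sketch using a simplified stand-in for the Column class (the real objects carry more state than this):

```python
from collections import namedtuple

# Simplified stand-in for steelscript's Column objects; the real ones
# carry more state, but cid/key/iskey/label are the essentials.
Column = namedtuple('Column', ['cid', 'key', 'iskey', 'label'])

columns = [
    Column(31, 'total_pkts', False, 'Total Packets'),
    Column(98, 'time', True, 'Time'),
]

# Index the list by text key and by numeric column id:
by_key = {c.key: c for c in columns}
by_cid = {c.cid: c for c in columns}

print(by_key['time'].label)   # Time
print(by_cid[31].key)         # total_pkts
```

In real scripts you rarely build these indexes yourself, since the attribute access shown next does the lookup for you.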
Another way to access one of the columns is through the netprofiler object as an attribute:
>>> print(p.columns.value.total_pkts)
<Column(cid=31, key=total_pkts, iskey=False label=Total Packets)>
>>> print(p.columns.key.time)
<Column(cid=98, key=time, iskey=True label=Time)>
To see the complete lists of key and value columns, you could enter the following:
>>> print(p.columns.keys)
[...long list of objects...]
>>> print(p.columns.values)
[...long list of objects...]
NetProfiler and Reporting¶
The NetProfiler package offers a set of interfaces to control and work with a SteelCentral NetProfiler appliance.
NetProfiler
Objects¶
-
class
steelscript.netprofiler.core.netprofiler.
NetProfiler
(host, port=None, auth=None)¶ The NetProfiler class is the main interface to interact with a NetProfiler appliance. Primarily this provides an interface to reporting.
-
__init__
(host, port=None, auth=None)¶ Establishes a connection to a NetProfiler appliance.
- Parameters
host (str) – name or IP address of the NetProfiler to connect to
port (int) – TCP port on which the NetProfiler appliance listens. If this parameter is not specified, the function will try to automatically determine the port.
auth – defines the authentication method and credentials to use to access the NetProfiler. It should be an instance of
UserAuth
orOAuth
force_version (str) – API version to use when communicating. If unspecified, this will use the latest version supported by both this implementation and the NetProfiler appliance.
See the base
Service
class for more information about additional functionality supported.
-
get_columns
(columns, groupby=None, strict=True)¶ Return valid Column objects for list of columns
- Parameters
columns (list) – list of strings, Column objects, or JSON dicts defining a column
groupby (str) – will optionally ensure that the selected columns are valid for the given groupby
strict (bool) – If True (default), will validate input against known Columns or create ephemeral columns for dynamic reports. If False, will avoid validation and process input as given. Used in some template or MultiQuery scenarios where the columns aren’t specific to a known realm/groupby pairing.
Note that this function may be incomplete for any given groupby.
-
get_columns_by_ids
(ids)¶ Return Column objects that have ids in list of strings ids.
- Parameters
ids (list) – list of integer ids
-
logout
()¶ Issue logout command to the NetProfiler appliance.
-
search_columns
(realms=None, centricities=None, groupbys=None)¶ Identify columns given one or more values for the triplet.
- Parameters
realms (list) – list of strings
centricities (list) – list of strings
groupbys (list) – list of strings
Results will be based on the following relationship table:
realm | centricity | groupby |
---|---|---|
traffic_summary | hos,int | all (except thu) |
traffic_overall_time_series | hos,int | tim |
traffic_flow_list | hos | hos |
identity_list | hos | thu |
-
property
version
¶ Returns the software version of the NetProfiler
-
Report
Objects¶
-
class
steelscript.netprofiler.core.report.
Report
(profiler)¶ Base class for all NetProfiler reports.
This class is normally not used directly, but instead via subclasses
SingleQueryReport
andMultiQueryReport
.-
__init__
(profiler)¶ Initialize a report object.
A report object is bound to an instance of a NetProfiler at creation.
-
delete
()¶ Issue a call to NetProfiler to delete this report.
-
get_data
(index=0, columns=None, limit=None)¶ Retrieve data for this report.
If columns is specified, restrict the data to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_iterdata
(index=0, columns=None, limit=None)¶ Retrieve iterator for the result data.
If columns is specified, restrict the legend to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_legend
(index=0, columns=None)¶ Return legend describing the columns in this report.
If columns is specified, restrict the legend to the list of requested columns.
-
get_query_by_index
(index=0)¶ Returns the query_id by specifying the index, defaults to 0.
-
get_totals
(index=0, columns=None)¶ Retrieve the totals for this report.
If columns is specified, restrict the totals to the list of requested columns.
-
run
(template_id, timefilter=None, resolution='auto', query=None, trafficexpr=None, data_filter=None, sync=True, custom_criteria=None)¶ Create the report and begin running the report on NetProfiler.
If the sync option is True, periodically poll until the report is complete, otherwise return immediately.
- Parameters
template_id (int) – numeric id of the template to use for the report
timefilter – range of time to query, instance of
TimeFilter
resolution (str) – data resolution, such as (1min, 15min, etc.), defaults to ‘auto’
query (str) – query object containing criteria
trafficexpr – instance of
TrafficFilter
data_filter (str) – deprecated filter to run against report data
sync (bool) – if True, poll for status until the report is complete
-
status
()¶ Query for the status of report. If the report has not been run, this returns None.
The return value is a dict containing:
status indicating completed when finished
percent indicating the percentage complete (0-100)
remaining_seconds is an estimate of the time left until complete
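The fields above lend themselves to a simple polling loop, which is roughly what wait_for_complete() does for you. A minimal self-contained sketch; poll_until_complete is a hypothetical helper, exercised here with a fake status function rather than a live report:

```python
import time

def poll_until_complete(get_status, interval=1, timeout=600):
    """Poll get_status() until it reports 'completed' or the timeout expires.

    get_status should return None (report not yet run) or a dict with
    a 'status' field, mirroring the Report.status() fields above.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status is not None and status.get('status') == 'completed':
            return status
        time.sleep(interval)
    raise TimeoutError('report did not complete within %d seconds' % timeout)

# A fake status sequence standing in for a live report.status:
states = iter([None,
               {'status': 'running', 'percent': 50},
               {'status': 'completed', 'percent': 100}])
result = poll_until_complete(lambda: next(states), interval=0)
print(result['status'])
```

In practice you would simply call report.wait_for_complete(), which also uses an interval and timeout with the same defaults.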
-
wait_for_complete
(interval=1, timeout=600)¶ Periodically checks report status and returns when 100% complete.
-
SingleQueryReport
Objects¶
-
class
steelscript.netprofiler.core.report.
SingleQueryReport
(profiler)¶ Bases:
steelscript.netprofiler.core.report.Report
Base class for NetProfiler REST API reports.
This class is not normally instantiated directly. See child classes such as
TrafficSummaryReport
.-
__init__
(profiler)¶ Initialize a report object.
A report object is bound to an instance of a NetProfiler at creation.
-
get_data
(columns=None, limit=None)¶ Retrieve data for this report.
If columns is specified, restrict the data to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_iterdata
(columns=None, limit=None)¶ Retrieve iterator for the result data.
If columns is specified, restrict the legend to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_legend
(columns=None)¶ Return legend describing the columns in this report.
If columns is specified, restrict the legend to the list of requested columns.
-
run
(realm, groupby='hos', columns=None, sort_col=None, timefilter=None, trafficexpr=None, host_group_type='ByLocation', resolution='auto', centricity='hos', area=None, data_filter=None, sync=True, query_columns_groupby=None, query_columns=None, limit=None, custom_criteria=None)¶ - Parameters
realm (str) – type of query, this is automatically set by subclasses
groupby (str) – sets the way in which data should be grouped (use netprofiler.groupby.*)
columns (list) – list of key and value columns to retrieve (use netprofiler.columns.*)
sort_col –
Column
reference to sort bytimefilter – range of time to query, instance of
TimeFilter
trafficexpr – instance of
TrafficFilter
host_group_type (str) – sets the host group type to use when the groupby is related to groups (such as ‘group’ or ‘peer_group’).
resolution (str) – data resolution, such as (1min, 15min, etc.), defaults to ‘auto’
centricity ('hos' or 'int') – ‘hos’ for host-based counts, or ‘int’ for interface based counts, only affects directional columns
area (str) – sets the appropriate scope for the report
data_filter (str) – deprecated filter to run against report data
sync (bool) – if True, poll for status until the report is complete
query_columns_groupby (list) – the groupby for time columns
query_columns (list) – list of unique values associated with query_columns_groupby
limit (integer) – Upper limit of rows of the result data. NetProfiler will return by default a maximum of 10,000 rows, but with this argument that limit can be raised up to ‘1000000’, if needed.
-
TrafficSummaryReport
Objects¶
-
class
steelscript.netprofiler.core.report.
TrafficSummaryReport
(profiler)¶ Bases:
steelscript.netprofiler.core.report.SingleQueryReport
-
__init__
(profiler)¶ Create a traffic summary report. The data is organized by the requested groupby, and retrieves the selected columns.
-
delete
()¶ Issue a call to NetProfiler to delete this report.
-
get_data
(columns=None, limit=None)¶ Retrieve data for this report.
If columns is specified, restrict the data to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_iterdata
(columns=None, limit=None)¶ Retrieve iterator for the result data.
If columns is specified, restrict the legend to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_legend
(columns=None)¶ Return legend describing the columns in this report.
If columns is specified, restrict the legend to the list of requested columns.
-
get_query_by_index
(index=0)¶ Returns the query_id by specifying the index, defaults to 0.
-
get_totals
(index=0, columns=None)¶ Retrieve the totals for this report.
If columns is specified, restrict the totals to the list of requested columns.
-
run
(groupby, columns, sort_col=None, timefilter=None, trafficexpr=None, host_group_type='ByLocation', resolution='auto', centricity='hos', area=None, sync=True, limit=None)¶ See
SingleQueryReport.run()
for a description of the keyword arguments.
-
status
()¶ Query for the status of report. If the report has not been run, this returns None.
The return value is a dict containing:
status indicating completed when finished
percent indicating the percentage complete (0-100)
remaining_seconds is an estimate of the time left until complete
-
wait_for_complete
(interval=1, timeout=600)¶ Periodically checks report status and returns when 100% complete.
-
TrafficOverallTimeSeriesReport
Objects¶
-
class
steelscript.netprofiler.core.report.
TrafficOverallTimeSeriesReport
(profiler)¶ Bases:
steelscript.netprofiler.core.report.SingleQueryReport
-
__init__
(profiler)¶ Create an overall time series report.
-
delete
()¶ Issue a call to NetProfiler to delete this report.
-
get_data
(columns=None, limit=None)¶ Retrieve data for this report.
If columns is specified, restrict the data to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_iterdata
(columns=None, limit=None)¶ Retrieve iterator for the result data.
If columns is specified, restrict the legend to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_legend
(columns=None)¶ Return legend describing the columns in this report.
If columns is specified, restrict the legend to the list of requested columns.
-
get_query_by_index
(index=0)¶ Returns the query_id by specifying the index, defaults to 0.
-
get_totals
(index=0, columns=None)¶ Retrieve the totals for this report.
If columns is specified, restrict the totals to the list of requested columns.
-
run
(columns, timefilter=None, trafficexpr=None, resolution='auto', centricity='hos', area=None, sync=True)¶ See
SingleQueryReport.run()
for a description of the keyword arguments. Note that sort_col, groupby, and host_group_type are not applicable to this report type.
-
status
()¶ Query for the status of report. If the report has not been run, this returns None.
The return value is a dict containing:
status indicating completed when finished
percent indicating the percentage complete (0-100)
remaining_seconds is an estimate of the time left until complete
-
wait_for_complete
(interval=1, timeout=600)¶ Periodically checks report status and returns when 100% complete.
-
TrafficFlowListReport
Objects¶
-
class
steelscript.netprofiler.core.report.
TrafficFlowListReport
(profiler)¶ Bases:
steelscript.netprofiler.core.report.SingleQueryReport
-
__init__
(profiler)¶ Create a flow list report.
-
delete
()¶ Issue a call to NetProfiler to delete this report.
-
get_data
(columns=None, limit=None)¶ Retrieve data for this report.
If columns is specified, restrict the data to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_iterdata
(columns=None, limit=None)¶ Retrieve iterator for the result data.
If columns is specified, restrict the legend to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_legend
(columns=None)¶ Return legend describing the columns in this report.
If columns is specified, restrict the legend to the list of requested columns.
-
get_query_by_index
(index=0)¶ Returns the query_id by specifying the index, defaults to 0.
-
get_totals
(index=0, columns=None)¶ Retrieve the totals for this report.
If columns is specified, restrict the totals to the list of requested columns.
-
run
(columns, sort_col=None, timefilter=None, trafficexpr=None, sync=True, limit=None)¶ See
SingleQueryReport.run()
for a description of the keyword arguments. Note that only columns, sort_col, timefilter, trafficexpr and limit apply to this report type.
-
status
()¶ Query for the status of report. If the report has not been run, this returns None.
The return value is a dict containing:
status indicating completed when finished
percent indicating the percentage complete (0-100)
remaining_seconds is an estimate of the time left until complete
-
wait_for_complete
(interval=1, timeout=600)¶ Periodically checks report status and returns when 100% complete.
-
IdentityReport
Objects¶
-
class
steelscript.netprofiler.core.report.
IdentityReport
(profiler)¶ Bases:
steelscript.netprofiler.core.report.SingleQueryReport
-
__init__
(profiler)¶ Create a report for Active Directory events.
-
delete
()¶ Issue a call to NetProfiler to delete this report.
-
get_data
(columns=None, limit=None)¶ Retrieve data for this report.
If columns is specified, restrict the data to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_iterdata
(columns=None, limit=None)¶ Retrieve iterator for the result data.
If columns is specified, restrict the legend to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_legend
(columns=None)¶ Return legend describing the columns in this report.
If columns is specified, restrict the legend to the list of requested columns.
-
get_query_by_index
(index=0)¶ Returns the query_id by specifying the index, defaults to 0.
-
get_totals
(index=0, columns=None)¶ Retrieve the totals for this report.
If columns is specified, restrict the totals to the list of requested columns.
-
run
(username=None, timefilter=None, trafficexpr=None, sync=True, limit=None)¶ Run complete user identity report over the requested timeframe.
username specific id to filter results by
timefilter is the range of time to query, a TimeFilter object
trafficexpr is an optional TrafficFilter object
- Parameters
limit (integer) – Upper limit of rows of the result data
-
status
()¶ Query for the status of report. If the report has not been run, this returns None.
The return value is a dict containing:
status indicating completed when finished
percent indicating the percentage complete (0-100)
remaining_seconds is an estimate of the time left until complete
-
wait_for_complete
(interval=1, timeout=600)¶ Periodically checks report status and returns when 100% complete.
-
WANSummaryReport
Objects¶
-
class
steelscript.netprofiler.core.report.
WANSummaryReport
(profiler)¶ Tabular or summary WAN Report data.
-
__init__
(profiler)¶ Create a WAN Traffic Summary report
-
delete
()¶ Issue a call to NetProfiler to delete this report.
-
get_data
(as_list=True, calc_reduction=False, calc_percentage=False)¶ Retrieve WAN report data.
- Parameters
as_list (bool) – return list of lists or pandas DataFrame
calc_reduction (bool) – include extra column with optimization reductions
calc_percentage (bool) – include extra column with optimization percent reductions
-
get_interfaces
(device_ip)¶ Query netprofiler to attempt to automatically determine LAN and WAN interface ids.
-
get_iterdata
(columns=None, limit=None)¶ Retrieve iterator for the result data.
If columns is specified, restrict the legend to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_legend
()¶ Return legend describing the columns in this report.
If columns is specified, restrict the legend to the list of requested columns.
-
get_query_by_index
(index=0)¶ Returns the query_id by specifying the index, defaults to 0.
-
get_totals
(index=0, columns=None)¶ Retrieve the totals for this report.
If columns is specified, restrict the totals to the list of requested columns.
-
run
(lan_interfaces, wan_interfaces, direction, columns=None, timefilter='last 1 h', trafficexpr=None, groupby='ifc', resolution='auto')¶ Run WAN Report.
- Parameters
lan_interfaces – list of full interface names for the LAN interfaces, e.g. [‘10.99.16.252:1’]
wan_interfaces – list of full interface names for the WAN interfaces
direction ('inbound' or 'outbound') –
columns – list of columns available in both ‘in’ and ‘out’ versions, for example, [‘avg_bytes’, ‘total_bytes’], instead of [‘in_avg_bytes’, ‘out_avg_bytes’]
-
status
()¶ Query for the status of report. If the report has not been run, this returns None.
The return value is a dict containing:
status indicating completed when finished
percent indicating the percentage complete (0-100)
remaining_seconds is an estimate of the time left until complete
-
wait_for_complete
(interval=1, timeout=600)¶ Periodically checks report status and returns when 100% complete.
-
WANTimeSeriesReport
Objects¶
-
class
steelscript.netprofiler.core.report.
WANTimeSeriesReport
(profiler)¶ -
__init__
(profiler)¶ Create a WAN Time Series report.
-
delete
()¶ Issue a call to NetProfiler to delete this report.
-
get_data
(as_list=True)¶ Retrieve WAN report data as list of lists or pandas DataFrame.
If as_list is True, return list of lists, False will return pandas DataFrame.
-
get_interfaces
(device_ip)¶ Query NetProfiler to attempt to automatically determine the LAN and WAN interface ids.
-
get_iterdata
(columns=None, limit=None)¶ Retrieve iterator for the result data.
If columns is specified, restrict the legend to the list of requested columns.
- Parameters
limit (integer) – Upper limit of rows of the result data.
-
get_legend
()¶ Return legend describing the columns in this report.
If columns is specified, restrict the legend to the list of requested columns.
-
get_query_by_index
(index=0)¶ Returns the query_id by specifying the index, defaults to 0.
-
get_totals
(index=0, columns=None)¶ Retrieve the totals for this report.
If columns is specified, restrict the totals to the list of requested columns.
-
run
(lan_interfaces, wan_interfaces, direction, columns=None, timefilter='last 1 h', trafficexpr=None, groupby=None, resolution='auto')¶ Run WAN Time Series Report
- Parameters
lan_interfaces – list of full interface names for the LAN interfaces, e.g. [‘10.99.16.252:1’]
wan_interfaces – list of full interface names for the WAN interfaces
direction ('inbound' or 'outbound') –
columns – list of columns available in both in_ and out_ versions, for example, [‘avg_bytes’, ‘total_bytes’], instead of [‘in_avg_bytes’, ‘out_avg_bytes’]
groupby (str) – Ignored for this report type, included for interface compatibility
-
status
()¶ Query for the status of report. If the report has not been run, this returns None.
The return value is a dict containing:
status indicating completed when finished
percent indicating the percentage complete (0-100)
remaining_seconds is an estimate of the time left until complete
-
wait_for_complete
(interval=1, timeout=600)¶ Periodically checks report status and returns when 100% complete.
-
MultiQueryReport
Objects¶
-
class
steelscript.netprofiler.core.report.
MultiQueryReport
(profiler)¶ Bases:
steelscript.netprofiler.core.report.Report
Used to generate NetProfiler standard template reports.
-
__init__
(profiler)¶ Create a report using standard NetProfiler template ids which will include multiple queries, one for each widget on a report page.
-
get_data_by_name
(query_name)¶ Return data and legend for query matching query_name.
-
get_query_names
()¶ Return full name of each query in report.
-
run
(template_id, columns=None, timefilter=None, trafficexpr=None, data_filter=None, resolution='auto')¶ The primary driver of these reports comes from the template_id, which defines the query sources. Thus, no query input or realm/centricity/groupby keywords are necessary.
- Parameters
template_id (int) – numeric id of the template to use for the report
columns (list) – optional list of key and value columns to retrieve (use netprofiler.columns.*), if omitted, will use template default columns instead.
timefilter – range of time to query, instance of
TimeFilter
trafficexpr – instance of
TrafficFilter
data_filter (str) – deprecated filter to run against report data
resolution (str) – data resolution, such as 1min, 15min, etc.; defaults to ‘auto’
-
steelscript.netprofiler.core.filters
¶
TimeFilter
Objects¶
-
class
steelscript.netprofiler.core.filters.
TimeFilter
(start, end)¶ -
__init__
(start, end)¶ Initialize self. See help(type(self)) for accurate signature.
-
compare_time
(t, resolution=60)¶ Return True if time t falls in between start and end times.
t may be a unix timestamp (float or string) or a datetime.datetime object
resolution is the number of seconds to use for rounding. Since NetProfiler stores data in one-minute increments, typically this should allow reasonable comparisons to report outputs. Passing zero (0) in here will enforce strict comparisons.
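The rounding comparison described above can be sketched in plain Python. This is a hypothetical re-implementation for illustration only, not the library's actual source; in particular, the floor-to-bucket rounding mode is an assumption.

```python
from datetime import datetime

def compare_time_sketch(t, start, end, resolution=60):
    """Return True if t falls between start and end after bucketing all
    three values to resolution-second boundaries (0 = strict compare).
    Floor-rounding is an assumption made for this sketch."""
    def bucket(x):
        # Accept unix timestamps (float/str) or datetime objects
        if isinstance(x, datetime):
            x = x.timestamp()
        x = float(x)
        return x if resolution == 0 else (x // resolution) * resolution
    return bucket(start) <= bucket(t) <= bucket(end)

# 5 seconds before the start, but inside the same one-minute bucket
print(compare_time_sketch(1510684995, 1510685000, 1510685040))                # True
print(compare_time_sketch(1510684995, 1510685000, 1510685040, resolution=0))  # False (strict)
```

With the default 60-second resolution, timestamps that land in the same minute bucket as the range endpoints still compare as inside the range, which matches the one-minute granularity of NetProfiler report data.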
-
classmethod
parse_range
(s)¶ Take a range string s and return a TimeFilter object.
-
profiler_minutes
(astimestamp=False, aslocal=False)¶ Provide best guess of whole minutes for current time range.
astimestamp determines whether to return results in Unix timestamp format or as datetime.datetime objects (defaults to datetime objects).
aslocal set to True will apply local timezone to datetime objects (defaults to UTC).
NetProfiler reports out in whole minute increments, and for time deltas less than one minute (60 seconds) it will use the rounded minute from the latest timestamp. For time deltas over one minute, lowest and highest rounded minutes are used, along with all in between.
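The whole-minute expansion can likewise be sketched with stdlib-only code. This is an illustration of the behavior described above, not the library's implementation; floor-rounding of both endpoints is an assumption.

```python
def whole_minutes(start_ts, end_ts):
    """Round the endpoints down to minute boundaries and return every
    whole minute in between, inclusive (unix timestamps in seconds)."""
    first = (int(start_ts) // 60) * 60
    last = (int(end_ts) // 60) * 60
    return list(range(first, last + 60, 60))

# A ~2 minute range expands to the three minute boundaries it touches
print(whole_minutes(1510685005, 1510685130))
# [1510684980, 1510685040, 1510685100]
```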
-
TrafficFilter
Objects¶
steelscript.netprofiler.core.hostgroup
¶
The Host Group module provides an interface for manipulating host group types and their host groups and hosts.
HostGroupType
Objects¶
-
class
steelscript.netprofiler.core.hostgroup.
HostGroupType
(netprofiler, id)¶ Convenience class to allow easy access to host group types.
Use this class to create new and access existing host group types. Within a host group type you can add and remove its
HostGroups
and their members. All changes to host group types are local until save()
is called.
Example accessing an existing
HostGroupType
and adding a new HostGroup
member to it:
>>> byloc = HostGroupType.find_by_name(netprofiler, 'ByLocation')
>>> sanfran = byloc.group['sanfran']
<HostGroup 'sanfran'>
>>> sanfran.get()
['10.99.1/24']
>>> sanfran.add('10.99.2/24')
['10.99.1/24', '10.99.2/24']
>>> byloc.save()
Example creating a new
HostGroupType
, HostGroup
, and group member:
>>> by_region = HostGroupType.create(netprofiler, 'ByRegion')
>>> north_america = HostGroup(by_region, 'north_america')
<HostGroup 'north_america'>
>>> north_america.get()
[]
>>> north_america.add(['10.99.1/24', '10.99.2/24'])
['10.99.1/24', '10.99.2/24']
>>> by_region.save()
-
__init__
(netprofiler, id)¶ HostGroupType
should not be instantiated directly; instead use create()
or find_by_name()
.
-
classmethod
create
(netprofiler, name, favorite=False, description='')¶ Create a new hostgroup type.
- Parameters
netprofiler (Netprofiler) – The Netprofiler you are using.
name (str) – The name of the new
HostGroupType
.favorite (bool) – if True, this type will be listed as a favorite.
description (str) – The hostgroup type’s description.
The new host group type will be created on the NetProfiler when
save()
is called.
-
delete
()¶ Delete this host group type and all groups.
-
classmethod
find_by_name
(netprofiler, name)¶ Find and load a host group type by name.
-
load
()¶ Load settings and groups.
-
save
()¶ Save settings and groups.
If this is a new host group type, it will be created.
-
HostGroup
Objects¶
-
class
steelscript.netprofiler.core.hostgroup.
HostGroup
(hostgrouptype, name)¶ -
__init__
(hostgrouptype, name)¶ New object representing a host group by name.
The new HostGroup
will be automatically added to the provided HostGroupType
and can be accessed with:
host_group_type.groups['group_name']
-
add
(cidrs, prepend=False, keep_together=True, replace=False)¶ Add a CIDR to this definition.
- Parameters
cidrs (string/list) – CIDR or list of CIDRS to add to this host group
prepend (bool) – if True, prepend instead of append
keep_together (bool) – if True, place new entries near the other entries in this host group. If False, append/prepend relative to the entire list.
replace (bool) – if True, replace existing config entries for this host group with
cidrs
-
clear
()¶ Clear all definitions for this host group.
-
get
()¶ Return a list of CIDRs assigned to this host group.
-
remove
(cidrs)¶ Remove a CIDR from this host group.
- Parameters
cidrs (string/list) – CIDR or list of CIDRS to remove from this host group
-
SteelScript AppResponse¶
Welcome to the documentation for the SteelScript SDK for SteelCentral AppResponse.
As in our other product-oriented SDKs, our primary interface to an
AppResponse appliance will be through the class AppResponse
.
This object can be instantiated with a hostname and some authentication
parameters, and then used as the object for all follow-on operations.
Detailed documentation may be found on the following pages:
Report Tutorial
Example Scripts
Reporting and Configuration
Changelog and Upgrades
SteelScript AppResponse Report Tutorial¶
This tutorial will show you how to run a report against an AppResponse appliance using SteelScript for Python. This tutorial assumes a basic understanding of Python.
The tutorial has been organized so you can follow it sequentially.
Throughout the example, you will be expected to fill in details
specific to your environment. These will be called out using a dollar
sign $<name>
– for example $host
indicates you should fill
in the host name or IP address of an AppResponse appliance.
Whenever you see >>>
, this indicates an interactive session using
the Python shell. The command that you are expected to type follows
the >>>
. The result of the command follows. Any lines with a
#
are just comments to describe what is happening. In many cases
the exact output will depend on your environment, so it may not match
precisely what you see in this tutorial.
AppResponse Object¶
Interacting with an AppResponse appliance leverages two key classes:
AppResponse
- provides the primary interface to the appliance, handling initialization, setup, and communication via REST API calls.
Report
- talks through the AppResponse object to create new reports and pull data when the reports are complete.
To begin, start Python from the shell or command line:
$ python
Python 2.7.13 (default, Apr 4 2017, 08:47:57)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
Once in the Python shell, let’s create an AppResponse object:
>>> from steelscript.appresponse.core.appresponse import AppResponse
>>> from steelscript.common import UserAuth
>>> ar = AppResponse('$host', auth=UserAuth('$username', '$password'))
In the above code snippet, we have created an AppResponse object, which represents a connection to an AppResponse appliance. The first argument is the hostname or IP address of the AppResponse appliance. The second argument is a named parameter and identifies the authentication method to use – in this case, simple username/password is used.
As soon as the AppResponse object is created, a connection is established to the AppResponse appliance, and the authentication credentials are validated. If the username and password are not correct, you will immediately see an exception.
The ar
object is the basis for all communication with the AppResponse
appliance, whether that is running a report, updating host groups or
downloading a pcap file. Now let’s take a look at the basic information
of the AppResponse appliance that we just connected to:
>>> info = ar.get_info()
>>> info['model']
u'VSCAN-2000'
>>> info['sw_version']
u'11.2.0 #13859'
# Let's see the entire info structure
>>> info
{u'device_name': u'680-valloy1',
u'hw_version': u'',
u'mgmt_addresses': [u'10.33.158.77'],
u'model': u'VSCAN-2000',
u'serial': u'',
u'sw_version': u'11.2.0 #13859'}
Creating a Report Script¶
Let’s create our first script. We’re going to write a simple script that runs a report against a packet capture job on our AppResponse appliance.
This script will get packets from a running packet capture job. To start, make sure the targeted AppResponse appliance has a running packet capture job.
Now create a file called report.py
and insert the following code:
import pprint
from steelscript.appresponse.core.appresponse import AppResponse
from steelscript.common import UserAuth
from steelscript.appresponse.core.reports import DataDef, Report
from steelscript.appresponse.core.types import Key, Value, TrafficFilter
from steelscript.appresponse.core.reports import SourceProxy
# Fill these in with appropriate values
host = '$host'
username = '$username'
password = '$password'
# Open a connection to the appliance and authenticate
ar = AppResponse(host, auth=UserAuth(username, password))
packets_source = ar.get_capture_job_by_name('default_job')
source = SourceProxy(packets_source)
columns = [Key('start_time'), Value('sum_tcp.total_bytes'), Value('avg_frame.total_bytes')]
granularity = '10'
resolution = '20'
time_range = 'last 1 minute'
data_def = DataDef(source=source, columns=columns, granularity=granularity,
resolution=resolution, time_range=time_range)
data_def.add_filter(TrafficFilter('tcp.port==80'))
report = Report(ar)
report.add(data_def)
report.run()
pprint.pprint(report.get_data())
Be sure to fill in appropriate values for $host
, $username
and
$password
. Run this script as follows and you should see something
like the following:
$ python report.py
[(1510685000, 3602855, 772.979),
(1510685020, 4109306, 754.001),
(1510685040, 657524, 779.057)]
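Each row of the output is a (start_time, sum_tcp.total_bytes, avg_frame.total_bytes) tuple, where start_time is a Unix epoch timestamp in seconds. A small stdlib snippet makes a sample row human-readable:

```python
from datetime import datetime, timezone

row = (1510685000, 3602855, 772.979)  # first sample row from the output above
ts = datetime.fromtimestamp(row[0], tz=timezone.utc)
print('%s  sum_tcp.total_bytes=%d  avg_frame.total_bytes=%.3f'
      % (ts.isoformat(), row[1], row[2]))
# -> 2017-11-14T18:43:20+00:00  sum_tcp.total_bytes=3602855  avg_frame.total_bytes=772.979
```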
Let’s take a closer look at what this script is doing.
Importing Classes¶
The first few lines are simply importing a few classes that we will be using:
import pprint
from steelscript.appresponse.core.appresponse import AppResponse
from steelscript.common import UserAuth
from steelscript.appresponse.core.reports import DataDef, Report
from steelscript.appresponse.core.types import Key, Value, TrafficFilter
from steelscript.appresponse.core.reports import SourceProxy
Creating an AppResponse object¶
Next, we create an AppResponse object that establishes our connection to the target appliance:
# Open a connection to the appliance and authenticate
ar = AppResponse(host, auth=UserAuth(username, password))
Creating a Data Definition Object¶
This section describes how to create a data definition object.
Now we need to create a SourceProxy object, which carries the information about the source from which data will be fetched.
packets_source = ar.get_capture_job_by_name('default_job')
source = SourceProxy(packets_source)
We first obtain a packet capture job object by using the name of the capture job.
packets_source = ar.get_capture_job_by_name('default_job')
To run a report against a Pcap file source, the file object can be derived as below:
packets_source = ar.get_file_by_id('$file_id')
Then we need to initialize a SourceProxy object as below:
source_proxy = SourceProxy(packets_source)
To run a report against a non-packets source, the SourceProxy object is initialized by just using the name of the source as below:
source_proxy = SourceProxy(name='$source_name')
To find the available source names, execute the following command in a shell:
$ steel appresponse sources $host -u $username -p $password
Name Groups Filters Supported on Metric Columns Granularities in Seconds
------------------------------------------------------------------------------------------------------------------------------------------------------
packets Packets False 0.1, 0.01, 0.001, 1, 10, 60, 600, 3600, 86400
aggregates Application Stream Analysis, Web True 60, 300, 3600, 21600, 86400
Transaction Analysis, UC Analysis
dbsession_summaries DB Analysis False 60, 300, 3600, 21600, 86400
sql_summaries DB Analysis False 60, 300, 3600, 21600, 86400
It shows that there are four supported sources in total. Note the following:
Source
aggregates
belongs to three groups: Application Stream Analysis, Web Transaction Analysis and UC Analysis.
Filters can be applied on the metric columns for the source
aggregates
.
Filters are not supported on metric columns for the sources
packets
, dbsession_summaries
and sql_summaries
.
We will support native methods for accessing source information via Python in an upcoming release.
Then we select the set of columns that we are interested in collecting. Note that AppResponse supports multiple sources. Each source supports a different set of columns. Each column can be either a key column or a value column. Each row of data will be aggregated according to the set of key columns selected. The value columns define the set of additional data to collect per row. In this example, we are asking to collect total bytes for tcp packets and average total packet length for each resolution bucket.
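The key/value aggregation described above can be illustrated with a stdlib-only sketch using made-up miniature rows (not real AppResponse data): rows sharing the same key-column value collapse into one output row, and the value column is aggregated per group.

```python
from collections import defaultdict

# Hypothetical rows: start_time acts as the key column,
# tcp_bytes plays the role of a value column aggregated with sum()
rows = [(1510685000, 100), (1510685000, 250), (1510685020, 80)]

totals = defaultdict(int)
for start_time, tcp_bytes in rows:
    totals[start_time] += tcp_bytes

print(sorted(totals.items()))  # [(1510685000, 350), (1510685020, 80)]
```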
To identify which columns are available, execute the helper command below at your shell prompt:
$ steel appresponse columns $host -u $username -p $password --source $source_name
For instance, to know the available columns within source packets
, we execute the
command in shell as:
$ steel appresponse columns $host -u $username -p $password --source packets
ID Description Type Metric Key/Value
----------------------------------------------------------------------------------------------------------------------------------
...
avg_frame.total_bytes Total packet length number True Value
...
start_time Used for time series data. Indicates the timestamp ---- Key
beginning of a resolution bucket.
...
sum_tcp.total_bytes Number of total bytes for TCP traffic integer True Value
Note that it would be better to pipe the output using | more
as there can be more
than 1000 rows.
Construct a list of columns, including both key columns and value columns in your script as shown below.
columns = [Key('start_time'), Value('sum_tcp.total_bytes'), Value('avg_frame.total_bytes')]
We will support native methods for accessing column information via Python in an upcoming release.
Now it is time to set the time-related criteria fields. First, we need to see the granularity values that the source of interest supports, by running the command below in a shell:
$ steel appresponse sources $host -u $username -p $password
Name Groups Filters Supported on Metric Columns Granularities in Seconds
------------------------------------------------------------------------------------------------------------------------------------------------------
packets Packets False 0.1, 0.01, 0.001, 1, 10, 60, 600, 3600, 86400
aggregates Application Stream Analysis, Web True 60, 300, 3600, 21600, 86400
Transaction Analysis, UC Analysis
dbsession_summaries DB Analysis False 60, 300, 3600, 21600, 86400
sql_summaries DB Analysis False 60, 300, 3600, 21600, 86400
As can be seen, source packets
supports granularity values of 0.1
,
0.01
, 0.001
, 1
, 10
, 60
, 600
, 3600
and 86400
(as in seconds).
granularity = '10'
resolution = '20'
time_range = 'last 1 minute'
Setting granularity to 10
means the data source computes a
summary of the metrics it received based on intervals of 10
seconds.
Resolution is a setting in addition to granularity that tells the data source to aggregate the data further. Its numeric value must be a multiple of the requested granularity value. In this script, the data will be aggregated into 20-second intervals. Setting resolution is optional.
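Since resolution must be a whole multiple of granularity, the pair is cheap to validate up front. The helper below is a hypothetical client-side check added for illustration; it is not part of steelscript.

```python
def check_resolution(granularity, resolution):
    """Raise if resolution is not a whole multiple of granularity.
    Both arguments are in seconds, as strings or numbers.
    Note: sub-second granularities (e.g. '0.1') may need a tolerance
    check due to float rounding; this sketch keeps it simple."""
    g, r = float(granularity), float(resolution)
    if r % g != 0:
        raise ValueError('resolution %s must be a multiple of granularity %s'
                         % (resolution, granularity))

check_resolution('10', '20')  # OK: each 20s bucket aggregates two 10s summaries
```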
If resolution is removed from the script, the output will consist of 10-second summaries instead of 20-second aggregated records, similar to the following:
$ python report.py
[(1510687770, 911456, 784.386),
(1510687780, 1672581, 780.85),
(1510687790, 1709843, 776.143),
(1510687800, 1338178, 797.484),
(1510687810, 1368713, 771.541),
(1510687820, 545244, 791.356)]
The parameter time_range
specifies the time range for which the data source computes
the metrics. Other valid formats include “this minute
”, “previous hour
” and
“06/05/17 17:09:00 to 06/05/17 18:09:00
”.
With all the above values derived, we can now create a DataDef
object as below.
data_def = DataDef(source=source, columns=columns, granularity=granularity, resolution=resolution, time_range=time_range)
To filter the data, it is easy to add traffic filters to the DataDef
object. First, let us
create a traffic filter as below.
tf = TrafficFilter('tcp.port==80')
The above filter is a steelfilter
traffic filter that outputs records matching tcp.port == 80
.
Note that running the sources
command can show whether filters can be applied on metric
columns for each source.
It is worth mentioning that the packets
source also supports bpf
filters and wireshark
filters.
Each has its own syntax and set of filter fields. Other sources support neither bpf
nor wireshark
filters.
bpf
and wireshark
filters can be created as below.
bpf_filter = TrafficFilter('port 80', type_='bpf')
wireshark_filter = TrafficFilter('tcp.port==80', type_='wireshark')
Now we can add the filter to the DataDef
object.
data_def.add_filter(tf)
You can create multiple filters and add them to the DataDef
object one by one using the above method.
After creating the data definition object, we are ready to run a report as below:
# Initialize a new report
report = Report(ar)
# Add one data definition object to the report
report.add(data_def)
# Run the report
report.run()
# Grab the data
pprint.pprint(report.get_data())
Currently, only one data definition is supported per report instance. The next release will include the ability to run multiple data definitions per report instance, which allows data definitions to reuse the same data source and yields a significant performance gain.
Extending the Example¶
As a last item to help get started with your own scripts, we will extend our example with one helpful feature: table outputs.
Rather than show how to update your existing example script, we will post the new script, then walk through key differences that add the feature.
Let us create a file table_report.py
and insert the following code:
from steelscript.appresponse.core.appresponse import AppResponse
from steelscript.common import UserAuth
from steelscript.appresponse.core.reports import DataDef, Report
from steelscript.appresponse.core.types import Key, Value, TrafficFilter
from steelscript.appresponse.core.reports import SourceProxy
# Import the Formatter class to output data in a table format
from steelscript.common.datautils import Formatter
# Fill these in with appropriate values
host = '$host'
username = '$username'
password = '$password'
# Open a connection to the appliance and authenticate
ar = AppResponse(host, auth=UserAuth(username, password))
packets_source = ar.get_capture_job_by_name('default_job')
source_proxy = SourceProxy(packets_source)
columns = [Key('start_time'), Value('sum_tcp.total_bytes'), Value('avg_frame.total_bytes')]
granularity = '10'
resolution = '20'
time_range = 'last 1 minute'
data_def = DataDef(source=source_proxy, columns=columns, granularity=granularity,
resolution=resolution, time_range=time_range)
data_def.add_filter(TrafficFilter('tcp.port==80'))
report = Report(ar)
report.add(data_def)
report.run()
# Get the header of the table
header = report.get_legend()
data = report.get_data()
# Output the data in a table format
Formatter.print_table(data, header)
Be sure to fill in appropriate values for $host
, $username
and
$password
. Run this script as follows and you should see the report
result rendered in a table format like the following:
$ python table_report.py
start_time sum_tcp.total_bytes avg_frame.total_bytes
--------------------------------------------------------------
1510685000 3602855 772.979
1510685020 4109306 754.001
1510685040 657524 779.057
As can be seen from the script, there are three differences.
First, we import the Formatter
class as below:
from steelscript.common.datautils import Formatter
After the report finishes running, we obtain the table header, which is essentially a list of column names that match the report result:
header = report.get_legend()
Finally, the Formatter
class is used to render the report result in
a table format:
Formatter.print_table(data, header)
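For reference, here is a stdlib-only approximation of the kind of right-aligned table that Formatter.print_table produces. It is a simplified sketch, not the actual steelscript implementation.

```python
def print_table_sketch(data, header):
    """Render rows as a right-aligned table with a header rule."""
    # Column width: widest of header and cell values, plus padding
    cols = [header] + [[str(v) for v in row] for row in data]
    widths = [max(len(str(r[i])) for r in cols) + 2 for i in range(len(header))]
    fmt = ''.join('{:>%d}' % w for w in widths)
    lines = [fmt.format(*header), '-' * sum(widths)]
    lines.extend(fmt.format(*[str(v) for v in row]) for row in data)
    return '\n'.join(lines)

print(print_table_sketch([(1510685000, 3602855, 772.979)],
                         ['start_time', 'sum_tcp.total_bytes', 'avg_frame.total_bytes']))
```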
Example Script Walkthroughs¶
The SteelScript AppResponse package includes several example scripts to help get started quickly with common operations and useful code to customize for specific situations. This guide will provide a summary of the scripts along with some example command-lines and the associated output.
These example scripts can be found in your SteelScript workspace, see steel mkworkspace for a guide on how to create a new workspace in your environment. Alternatively, they can be found inside the GitHub repository.
Conventions and Common Arguments¶
Throughout this discussion we will be showing the results of --help
options
for each of the scripts where they vary from the core set. All of the scripts
take the same core set of options and arguments as follows:
Usage: <SCRIPT_NAME> <HOST> [options] ...
Required Arguments:
HOST AppResponse hostname or IP address
Options:
--version show program's version number and exit
-h, --help show this help message and exit
Logging Parameters:
--loglevel=LOGLEVEL
log level: debug, warn, info, critical, error
--logfile=LOGFILE log file, use '-' for stdout
Connection Parameters:
-P PORT, --port=PORT
connect on this port
-u USERNAME, --username=USERNAME
username to connect with
-p PASSWORD, --password=PASSWORD
password to connect with
--oauth=OAUTH OAuth Access Code, in place of username/password
-A API_VERSION, --api_version=API_VERSION
api version to use unconditionally
REST Logging:
--rest-debug=REST_DEBUG
Log REST info (1=hdrs, 2=body)
--rest-body-lines=REST_BODY_LINES
Number of request/response body lines to log
This example output shows no options specific to the script. To execute this script, use the following syntax:
$ <SCRIPT_NAME> ar11.example.com -u admin -p admin
That typically provides the bare minimum options for execution: the hostname and a username/password combination.
Also, we will be showing example output which in some cases may extend past the size of the formatting box; be sure to scroll to the right when needed to see the full command-line arguments or console output.
list_sources.py
¶
This script takes no extra arguments, and will just cycle through Capture Jobs, Clips, and Files, printing out the results.
Example output:
$ python list_sources.py ar11.example.com -u admin -p admin
Capture Jobs
------------
id name mifg_id filter state start_time end_time size
-------------------------------------------------------------------------------------------------------------------------------------------------------
524abdd0-b620-4ec8-9fa0-d3e2d0376f42 test1 1000 port 80 STOPPED 0.000000000 0.000000000 0
82fc88b2-ae6a-44c7-bf6e-1ee262700ab9 port81 1000 port 81 STOPPED 0.000000000 0.000000000 0
94e116fa-11ca-40b7-9926-0f6825b4fcf2 test5 1000 STOPPED 0.000000000 0.000000000 0
a9db07eb-b330-4fad-a025-7ae9e02b7f69 port80 1000 port 80 STOPPED 0.000000000 0.000000000 0
fc8ae608-31a3-4990-b0bf-373e908f6954 default_job 1000 None RUNNING 1501182870.000000000 1501272580.000000000 16048877139
Clips
-----
id job_id start_time end_time filters
---------------------------------------------------------------------------------------------------------------------------
fc8ae608-31a3-4990-b0bf-373e908f69540000 fc8ae608-31a3-4990-b0bf-373e908f6954 1501165048 1501165348 None
Uploaded Files/PCAPs
--------------------
type id link_type format size created modified
---------------------------------------------------------------------------------------------------------
PCAP_FILE /admin/port80_export.pcap EN10MB PCAP_US 7518727 1501166729 1501166729
certificate.py
¶
This script takes no extra arguments, and will just print out details of the SSL certificate.
Example output:
$ python certificate.py ar11.example.com -u admin -p admin
-------------------------------
Certificate Details
-------------------
Subject->common_name: localhost.localdomain
Subject->country: US
Subject->state: California
Subject->organization: Riverbed Technology, Inc.
Subject->locality: San Francisco
Fingerprint->value: 1E:39:E1:6C:29:31:93:3E:39:EE:AE:BD:86:EB:44:7F:E0:C5:FB:7C
Fingerprint->algorithm: SHA1
Key->algorithm: rsaEncryption
Key->size: 2048
Issuer->common_name: localhost.localdomain
Issuer->country: US
Issuer->state: California
Issuer->organization: Riverbed Technology, Inc.
Issuer->locality: San Francisco
Valid at: 2017-10-17 10:39:44+00:00
Expires at: 2019-01-17 10:39:44+00:00
PEM: -----BEGIN CERTIFICATE-----
MIIDzzCCAregAwIBAgIJAKwfBmgqvpUNMA0GCSqGSIb3DQEBCwUAMH4xHjAcBgNV
BAMMFWxvY2FsaG9zdC5sb2NhbGRvbWFpbjEiMCAGA1UECgwZUml2ZXJiZWQgVGVj
aG5vbG9neSwgSW5jLjEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzETMBEGA1UECAwK
Q2FsaWZvcm5pYTELMAkGA1UEBhMCVVMwHhc2123gxMDE3MTAzOTQ0WhcNMTkxMDE3
MTAzOTQ0WjB+MR4wHAYDVQQDDBVsb2NhbGhvc3QubG9jYWxkb21haW4xIjAgBgNV
BAoMGVJpdmVyYmVkIFRlY2hub2xvZ3ksIEluYy4xFjAUBgNVBAcMDVNhbiBGcmFu
Y2lzY28xEzARBgNVBAgMCkNhbGlmb3JuaWExCzAJBgNVBAYTAlVTMIIBIjANBgkq
hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApspx/OhQD5REEJqAhzW+q4gHwDNgJ4x/
y3Vds20vnptJPBfDN02ZqP1n2aeg27wcBOH3PBU5DEqqB1+JaSqG/AV1JCVXy70H
CnmGaRCf6amJgiZGMSPDmOdgV3ZFKS8c/BpAwGsVfgbo8BSLK5UjgasKLYV/McQ0
Nn1YwpLtfqsnI5TdEMFJCmMKhPfIdqSbNXUeHKHctKpLlIJCfJHn76aOihHiy8kr
MSSx48XKppEpppuZSfRXs9Cf+qnhWpjXm1qr1QtuQPu9o12/Xl1/0TTHm8Zovr3g
pEs6vtpU6mDHejSV4FUxe29Uwl/ADV+8TYvVDZmdOGbj++Q8MJ6noQIDAQABo1Aw
TjAdBgNVHQ4EFgQUDYeliG8fWkDY17nXGE8Ut107qCEwHwYDVR0jBBgwFoAUDYel
iG8fWkDY17nXGE8Ut107qCEwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOC
AQEAbK8HxOQMWbszqQIMx3lc3UQ1SuWeFqLdnTBsY6AQHVXUfuwaAhrERNsZewdd
HLcO5YqnK+koj5erXfcCGJTvUiPK51vGROYiMzxqL2YcfDDEUNg9Viiz3xBZsfhW
5cAGzrvg7EQtxsEBBJH8ikTjqkFM6H2G7QnJAMFNj01S8cJs1Iy1HNOENGFGQ/GD
T8NZrvfrP9XVEhG4y8W4Czz0zDOOfUsvOe5AKrRX5E4u8OrK+y2Afej3L+KFQA0K
I9pprZqwZ59bO0j1yvTpzapjjXYXV0sWKrXAtqGUVgv/Yhvwio7X7r64rbTnH/Rt
JX07lhBGzyiC2rB1D5Kl35sgzw==
-----END CERTIFICATE-----
create_capture_job.py
¶
This script provides a means to query existing jobs and MIFGs, and also a simple mechanism to create a new capture job with an associated filter expression.
There are a few options unique to this script, which are fairly self-explanatory:
--jobname=JOBNAME job name
--ifg=IFGS ID of the MIFG or VIFG on which this job is
collecting packet data. For AR11 versions 11.5 and
later, this can be a comma-separated list of VIFG IDs.
Earlier versions can only be single values
--filter=FILTER STEELFILTER/BPF filter of the packets collected
--filter-type=FILTER_TYPE
STEELFILTER or BPF, default BPF
--show-ifgs Show list of IFG on the device
--show-jobs Show list of capture jobs on the device
Using the --show-jobs
command will output the same table as seen in
list_sources.py, and using the --show-ifgs
will show the
virtual interface groups available:
$ python create_capture_job.py ar11.example.com -u admin -p admin --show-ifgs
id name filter members
----------------------------------------------
1000 other_vifg None []
1024 vifg_7 None ['7']
1025 vifg_untagged None ['0']
1026 vifg_10 None ['10']
1027 vifg_104 None ['104']
1028 vifg_108 None ['108']
1029 vifg_32 None ['32']
1030 vifg_5 None ['5']
1031 vifg_112 None ['112']
1032 vifg_17 None ['17']
1033 vifg_6 None ['6']
1034 vifg_20 None ['20']
Creating a capture job requires just a desired job name, the IFG (either a MIFG ID or VIFG ID depending on the version of the appliance), and an optional filter expression:
$ python create_capture_job.py ar11.example.com -u admin -p admin --jobname newtest1 --filter "port 80" --ifg=1000
Successfully created packet capture job newtest1
Running the --show-jobs option will now show the newly created capture job.
upload_pcap.py¶
As the name implies, this script will take a PCAP file on the local system and upload it to the remote AppResponse appliance. The two extra options available are:
--filepath=FILEPATH path to pcap tracefile to upload
--destname=DESTNAME location to store on server, defaults to
<username>/<basename of filepath>
Only the --filepath option is required.
Example output:
$ python upload_pcap.py ar11.example.com -u admin -p admin --filepath http.pcap
Uploading http.pcap
File 'http.pcap' successfully uploaded.
The properties are {'created': '1501273621', 'format': 'PCAP_US',
'access_rights': {'owner': 'admin'}, 'modified': '1501273621',
'type': 'PCAP_FILE', 'id': '/admin/http.pcap', 'link_type': 'EN10MB', 'size': 1601}
download.py¶
This script provides a means to download packets into a local PCAP file from a variety of sources on AppResponse. Several options provide fine control over just what gets downloaded:
Source Options:
--source-file=SOURCE_FILE
source file path to export
--jobname=JOBNAME job name to export
--jobid=JOBID job ID to export
--clipid=CLIPID clip ID to export
Time and Filter Options:
--starttime=START_TIME
start time for export (timestamp format)
--endtime=END_TIME end time for export (timestamp format)
--timerange=TIMERANGE
Time range to analyze (defaults to "last 1 hour")
other valid formats are: "4/21/13 4:00 to 4/21/13
5:00" or "16:00:00 to 21:00:04.546"
--filter=FILTERS filter to apply to export, can be repeated as many
times as desired. Each filter should be formed as
"<id>,<type>,<value>", where <type> should be one of
"BPF", "STEELFILTER", "WIRESHARK", i.e.
"f1,BPF,port 80".
Output Options:
--dest-file=DEST_FILE
destination file path to export
--overwrite Overwrite the local file if it exists
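The "&lt;id&gt;,&lt;type&gt;,&lt;value&gt;" filter format above can be parsed with at most two splits, since the filter value itself (e.g. a WIRESHARK display filter) may legitimately contain commas. The following is an illustrative sketch only, not the script's actual implementation:

```python
# Minimal sketch (not the script's actual code): parse a --filter argument
# of the form "<id>,<type>,<value>". The value may itself contain commas,
# so split at most twice.

VALID_TYPES = {"BPF", "STEELFILTER", "WIRESHARK"}

def parse_filter(arg):
    """Split 'f1,BPF,port 80' into its id, type and value parts."""
    parts = arg.split(",", 2)
    if len(parts) != 3:
        raise ValueError("filter must be '<id>,<type>,<value>': %r" % arg)
    fid, ftype, value = parts
    if ftype not in VALID_TYPES:
        raise ValueError("filter type must be one of %s" % sorted(VALID_TYPES))
    return fid, ftype, value

print(parse_filter("f1,BPF,port 80"))  # ('f1', 'BPF', 'port 80')
```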
Choose one of the Source Options, a time filter, and optionally a filter expression. To download a PCAP file (for example, the same one we just uploaded with upload_pcap.py), we need to specify the file path on the appliance, a destination, and a special time filter of --starttime=0 --endtime=0 to make sure we get the whole PCAP rather than a slice:
$ python download.py ar11.example.com -u admin -p admin --source-file "/admin/http.pcap" --starttime=0 --endtime=0 --dest-file=http_output.pcap
Downloading to file http_output.pcap
Finished downloading to file http_output.pcap
$ ls -l
...
-rw-r--r--@ 1 root staff 1601 May 10 09:06 http.pcap
-rw-r--r-- 1 root staff 1601 Jul 28 17:04 http_output.pcap
...
To download packets from a capture job, we use slightly different options.
$ python download.py ar11.example.com -u admin -p admin --jobname default_job --timerange "last 3 seconds" --overwrite
Downloading to file default_job_export.pcap
Finished downloading to file default_job_export.pcap
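Behind a relative --timerange such as "last 3 seconds" lies a conversion to an absolute start/end pair before the export is requested; SteelScript's own time utilities handle the full set of supported formats. Below is a self-contained sketch of just the "last N unit" case, using a hypothetical `resolve_last` helper:

```python
import re
import time

# Hypothetical helper (illustration only): convert "last N <unit>" into
# (start, end) epoch seconds. The real scripts use SteelScript's time
# utilities, which accept many more formats.
UNITS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}

def resolve_last(timerange, now=None):
    match = re.match(r"last (\d+) (second|minute|hour|day)s?$", timerange)
    if not match:
        raise ValueError("unsupported time range: %r" % timerange)
    amount, unit = int(match.group(1)), match.group(2)
    end = int(now if now is not None else time.time())
    return end - amount * UNITS[unit], end

print(resolve_last("last 3 seconds", now=1501273621))  # (1501273618, 1501273621)
```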
packets_report.py¶
This example provides a quick means to generate a report against a given packets source on AppResponse. The sources could be a file, clip, or running capture job, and the query can take the form of virtually any combination of key and value columns.
The available options for this script:
Source Options:
--sourcetype=SOURCETYPE
Type of data source to run report against, i.e. file,
clip or job
--sourceid=SOURCEID
ID of the source to run report against
--keycolumns=KEYCOLUMNS
List of key column names separated by comma
--valuecolumns=VALUECOLUMNS
List of value column names separated by comma
Time and Filter Options:
--timerange=TIMERANGE
Time range to analyze, valid formats are: "06/05/17
17:09:00 to 06/05/17 18:09:00" or "17:09:00 to
18:09:00" or "last 1 hour".
--granularity=GRANULARITY
The amount of time in seconds for which the data
source computes a summary of the metrics it received.
--resolution=RESOLUTION
Additional granularity in seconds to tell the data
source to aggregate further.
Output Options:
--csvfile=CSVFILE CSV file to store report data
The critical items in this report are the --keycolumns and --valuecolumns options. Together they define the format of the resulting data. Virtually any combination of available fields can be used either as a key or a value. The key columns define how the rows are grouped and ensure each row is unique – think of them as the key columns of a SQL table. The value columns hold whatever values match up with the keys.
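To make the key/value distinction concrete, here is a small, library-free sketch of the same grouping idea: key columns act like a SQL GROUP BY, and value columns are aggregated per key. The appliance performs this server-side; the rows below are made up for illustration.

```python
from collections import defaultdict

# Made-up rows: (src_ip, dst_ip, total_bytes, packets)
rows = [
    ("10.64.101.2", "10.64.101.226", 500, 4),
    ("10.64.101.2", "10.64.101.226", 700, 6),
    ("192.70.0.3", "192.70.84.228", 300, 2),
]

totals = defaultdict(lambda: [0, 0])
for src_ip, dst_ip, total_bytes, packets in rows:
    key = (src_ip, dst_ip)          # key columns: one output row per unique key
    totals[key][0] += total_bytes   # value columns: aggregated per key
    totals[key][1] += packets

for (src, dst), (nbytes, npkts) in sorted(totals.items()):
    print(f"{src},{dst},{nbytes},{npkts}")
```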
A simple packets report using src_ip and dest_ip as the keys, and bytes and packets as the values:
$ python packets_report.py ar11.example.com -u admin -p admin --sourcetype=job \
--sourceid=default_job --keycolumns=src_ip.addr,dst_ip.addr \
--valuecolumns=sum_traffic.total_bytes,sum_traffic.packets --timerange='last 10 seconds' --granularity=1 \
--filterexpr 'tcp.port==80'
src_ip.addr,dst_ip.addr,sum_traffic.total_bytes,sum_traffic.packets
3ffe::300:ff:fe00:62,3ffe::200:ff:fe00:2,888,12
192.70.163.102,192.70.0.4,2056,14
10.33.122.39,10.5.39.140,66,1
3ffe::200:ff:fe00:2,3ffe::300:ff:fe00:62,9602,7
10.64.101.226,10.64.101.2,57675,79
10.64.101.2,10.64.101.226,69775,86
107.178.255.114,10.33.122.39,611,4
192.70.163.103,192.70.0.4,1403,11
10.33.122.39,107.178.255.114,310,4
10.64.101.225,10.8.117.12,96690,134
10.8.117.12,10.64.101.225,31662,65
bad:dad:cafe::1eb9:a44b,bad:dad:cafe::2ec3:ae55,8432,58
34.197.206.192,10.33.124.26,60,1
192.70.0.3,192.70.84.228,27765,21
10.33.124.26,34.197.206.192,60,1
10.64.101.225,10.8.117.10,132,2
.... snipped ....
For a complete listing of the available columns to choose from, see the output of the built-in command steel appresponse columns.
general_report.py¶
This example provides a quick means to generate a report against a given non-packets source on AppResponse. The source could be any one of the supported sources except packets, and the query can take the form of virtually any combination of key and value columns supported by the selected source.
The available options for this script:
Source Options:
--showsources Display the set of source names
--sourcename=SOURCENAME
Name of source to run report against, i.e. aggregates,
flow_tcp, etc.
--keycolumns=KEYCOLUMNS
List of key column names separated by comma
--valuecolumns=VALUECOLUMNS
List of value column names separated by comma
Time and Filter Options:
--timerange=TIMERANGE
Time range to analyze, valid formats are: "06/05/17
17:09:00 to 06/05/17 18:09:00" or "17:09:00 to
18:09:00" or "last 1 hour".
--granularity=GRANULARITY
The amount of time in seconds for which the data
source computes a summary of the metrics it received.
--resolution=RESOLUTION
Additional granularity in seconds to tell the data
source to aggregate further.
--filtertype=FILTERTYPE
Traffic filter type, needs to be one of 'steelfilter',
'wireshark', 'bpf', defaults to 'steelfilter'
--filterexpr=FILTEREXPR
Traffic filter expression
Output Options:
--csvfile=CSVFILE CSV file to store report data
A simple general report that outputs applications with response time larger than 1 second over the last 1 minute can be run as follows:
$ python general_report.py ar11.example.com -u admin -p admin \
--keycolumns app.id --valuecolumns app.name,avg_tcp.srv_response_time,avg_tcp.user_response_time \
--sourcename aggregates --timerange 'last 1 min' --granularity 60 \
--filterexpr 'avg_tcp.user_response_time>1'
app.id,app.name,avg_tcp.srv_response_time,avg_tcp.user_response_time
1000,Quantcast,2.108343132,3.559153813
1002,Rambler.ru,0.332615682,6.157294029
1003,Rapleaf,0.759893196,8.380697625
.... snipped ....
For a complete list of the available source names to choose from, see the output of the built-in command steel appresponse sources.
ssl_keys.py¶
This script takes no extra arguments; it imports an SSL key, prints out its details, and then deletes the key.
Example output:
$ python ssl_keys.py ar11.example.com -u admin -p admin
---Import SSL Key---
Key successfully imported
<SSL_Key 1/Demo_Key_7>
---SSL Keys Count---
1
---SSL Key Details---
ID: 1
Name: Demo_Key_7
Description: Demo_Description_7
Timestamp: 2018-10-17 14:22:13+00:00
---Delete SSL Key---
Key deleted.
---SSL Keys Count---
0
system_update.py¶
This script takes no extra arguments. It fetches an update image from a provided URL, prints out the details of the image, and deletes it. It also prints out the details of the current update state.
Example output:
$ python system_update.py ar11.example.com -u admin -p admin
---Update images---
No images available
---Fetch image---
Please, enter an update image url: http://support.riverbed.com/update/current/update.iso
Fetch successfully started
Wait 5 sec ...
---Update Image Details---
ID: 1
State: UPLOADING
State Description:
Version: N/A
Progress: 15.17
Checksum: N/A
---Delete Image---
Image deleted
---Update Details---
State: IDLE
State Description:
Last State Time: 2018-10-17 14:08:38+00:00
Target Version: None
Update History:
Time: 2018-10-17 14:35:20+00:00 Version: 11.6.0 #23947
--Initialize an update if in IDLE state or reset it--
Update state: IDLE
Initializing and resetting
Wait 10 sec ...
Resetting into IDLE state
---How to execute an update---
In order to execute an update run those steps:
1. Initialize update: update.initialize()
2. Run update: update.start()
Those steps will bring box down and it will be inaccessible for some time
update_host_groups.py¶
This script provides a simple interface to the Host Group functionality within AppResponse. It will display, update, or create new host groups as needed.
The custom options are:
HostGroup Options:
--file=FILE Path to the file with hostgroup info, each line should
have two columns formatted as:
"<hostgroup_name>
<subnet_1>,<subnet_2>,...,<subnet_n>"
--name=NAME Name of host group to update or delete
--id=ID ID of the host group to update or delete
--hosts=HOSTS List of hosts and host-ranges
--disabled Whether host group should be disabled
--operation=OPERATION
show: render configured hostgroups
add: add one hostgroup
update: update one hostgroup
upload: upload a file with hostgroups
delete: delete one hostgroup
clear: clear all hostgroups
The --operation option controls the primary action of the script, which can be one of the several values shown in the help screen. Using the show operation, we can see all of the configured Host Groups:
> python update_host_groups.py ar11.example.com -u admin -p admin --operation show
id name active definition
-------------------------------------------------------------------------
14 test5 True ['4.4.4.4-4.4.4.4']
15 test7 True ['3.3.0.0-3.3.255.255', '4.2.2.0-4.2.2.255']
In order to add new groups, we can either use the options to create them one by one, or we can use a specially formatted file to upload them all at once. Take the following file named hostgroup_upload.csv, for example:
CZ-Prague-HG 10.143.58.64/26,10.143.58.63/23
MX-SantaFe-HG 10.194.32.0/23
KR-Seoul-HG 10.170.55.0/24
ID-Surabaya-HG 10.234.9.0/24
Now, let’s upload this to the server:
> python update_host_groups.py ar11.example.com -u admin -p admin --operation upload --file hostgroup_upload.csv
Successfully uploaded 4 hostgroup definitions.
And if we re-run our show operation, we will see our groups in the listing:
> python update_host_groups.py ar11.example.com -u admin -p admin --operation show
id name active definition
-------------------------------------------------------------------------
14 test5 True ['4.4.4.4-4.4.4.4']
15 test7 True ['3.3.0.0-3.3.255.255', '4.2.2.0-4.2.2.255']
16 CZ-Prague-HG True ['10.143.58.0-10.143.59.255', '10.143.58.64-10.143.58.127']
17 MX-SantaFe-HG True ['10.194.32.0-10.194.33.255']
18 KR-Seoul-HG True ['10.170.55.0-10.170.55.255']
19 ID-Surabaya-HG True ['10.234.9.0-10.234.9.255']
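The upload file format shown above (a hostgroup name, whitespace, then a comma-separated subnet list per line) is easy to generate or check programmatically. Below is a sketch of a parser for it, illustrative only; the script's own parsing may differ:

```python
# Parse lines of the form "<hostgroup_name> <subnet_1>,...,<subnet_n>".
def parse_hostgroups(text):
    groups = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        name, subnets = line.split(None, 1)
        groups[name] = subnets.split(",")
    return groups

sample = """\
CZ-Prague-HG 10.143.58.64/26,10.143.58.63/23
MX-SantaFe-HG 10.194.32.0/23
"""
print(parse_hostgroups(sample))
```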
Reporting and Configuration Class Reference¶
AppResponse Objects¶
class steelscript.appresponse.core.appresponse.AppResponse(host, auth, port=443, versions=None)¶
Main interface to interact with an AppResponse appliance.
__init__(host, auth, port=443, versions=None)¶
Initialize an AppResponse object.
- Parameters
host (str) – name or IP address of the AppResponse appliance.
auth – defines the authentication method and credentials to use to access the AppResponse. It should be an instance of UserAuth or OAuth.
port – integer, port number to connect to appliance
versions (dict) – service versions to use, keyed by the service name, value is a list of version strings that are required by the external application. If unspecified, this will use the latest version of each service supported by both this implementation and the AppResponse appliance.
create_report(data_def_request)¶
Helper method to initiate an AppResponse report.
- Parameters
data_def_request (DataDef) – Single DataDef object defining the report criteria.
find_service(name)¶
Return a ServiceDef for a given service name.
get_capture_job_by_name(name)¶
Find a capture job by name.
get_capture_jobs()¶
Get a list of all existing capture jobs.
get_column_objects(source_name, columns)¶
Return proper Key/Value objects for a given list of column strings.
- Parameters
source_name – string value of source name
columns – list of columns as strings
- Returns
column objects
get_info()¶
Get the basic info of the device.
property service_manager¶
Initialize the service manager instance if it does not exist.
upload(dest_path, local_file)¶
Upload a local file to the AppResponse 11 device.
- Parameters
dest_path – path where local file will be stored at AppResponse device
local_file – path to local file to be uploaded
- Returns
location information if resource has been created, otherwise the response body (if any).
property versions¶
Determine version strings for each required service.
Reporting Objects¶
class steelscript.appresponse.core.reports.DataDef(source, columns, start=None, end=None, duration=None, time_range=None, granularity=None, resolution=None, limit=None, topbycolumns=None, live=False, retention_time=3600)¶
Interface to build a data definition for uploading to a report.
__init__(source, columns, start=None, end=None, duration=None, time_range=None, granularity=None, resolution=None, limit=None, topbycolumns=None, live=False, retention_time=3600)¶
Initialize a data definition request object.
- Parameters
source – Reference to a source object. If a string, will try to convert to a SourceProxy
columns – list of Key/Value column objects.
start – epoch start time in seconds.
end – epoch end time in seconds.
duration – string duration of data def request.
time_range – string time range of data def request.
granularity (int) – granularity in seconds. Required.
resolution (int) – resolution in seconds. Optional
limit – limit to number of returned rows. Optional
topbycolumns – Key/Value columns to be used for topn. Optional.
live – boolean for whether this is a live retrieval data_def. Setting this to true changes the behavior somewhat, see notes.
retention_time – int seconds for how long to store data before overwriting buffer. Only applicable for live reports.
For defining the overall time for the report, either a single time_range string may be used or a combination of start/end/duration.
Further discussion on granularity and resolution: granularity refers to the amount of time for which the data source computes a summary of the metrics it received. The data source examines all data and creates summaries for 1 second, 1 minute, 5 minutes, 15 minutes, 1 hour, 6 hours, 1 day, and so on. Finer granularity (shorter time periods) yields greater accuracy; coarser granularity (1 hour, 6 hours, 1 day) requires less processing, so the data is returned faster. Granularity must be specified as a number of seconds.
Resolution must be a multiple of the requested granularity. For example, if you specify a granularity of 5 minutes (300 seconds), then the resolution can be set to 5 minutes, 10 minutes, 15 minutes, and so on. If the resolution is equal to the granularity, it has no effect on the number of returned samples. The resolution is optional.
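The relationship between granularity and resolution can be sketched as a small validation helper (hypothetical; DataDef itself may surface such errors differently):

```python
def validate_sampling(granularity, resolution=None):
    """Return the effective reporting interval in seconds.

    granularity: summary interval computed by the data source (seconds).
    resolution: optional further aggregation; must be a whole multiple
    of granularity.
    """
    if granularity <= 0:
        raise ValueError("granularity must be a positive number of seconds")
    if resolution is None:
        return granularity  # no further aggregation requested
    if resolution % granularity != 0:
        raise ValueError("resolution %d is not a multiple of granularity %d"
                         % (resolution, granularity))
    return resolution

print(validate_sampling(300, 900))  # 900: 5-minute summaries rolled up to 15 minutes
```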
Notes: Live reports can be created by setting the option live to True.
This will zero out any time filter that may have been applied, and will use a retention time value that determines how long to keep the data in a rolling buffer. Retention time defaults to one hour (3600 seconds).
add_filter(filter)¶
Add one traffic filter to the data def.
- Parameters
filter – types.TrafficFilter object
class steelscript.appresponse.core.reports.Report(appresponse)¶
Main interface to build and run a report on AppResponse.
__init__(appresponse)¶
Initialize a new report.
- Parameters
appresponse – the AppResponse object.
add(data_def_request)¶
Add one data definition request.
delete()¶
Delete the report from the appliance.
get_data(index=0)¶
Return data for the indexed data definition requests.
Note that for live data objects the index cannot be None; only explicit requests are allowed. If multiple data defs in a report need to collect data, query them individually.
Also, the object returned from a live query will be a data_def_results object (https://support.riverbed.com/apis/npm.probe.reports/1.0/service.html#types_data_def_results). The data can be referenced via data['data'], while metadata about the results, including start and end times, can be found at data['meta'].
- Parameters
index (int) – Set to None to return data from all data definitions, defaults to returning the data from just the first data def.
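As a rough illustration of working with a live result of the shape described above (the key names inside 'meta' are assumptions for illustration; consult the data_def_results type for the authoritative schema):

```python
# Hypothetical live-query result: report rows under 'data', metadata about
# the covered time window under 'meta' (key names here are assumed).
result = {
    "data": [["10.0.0.1", 1200], ["10.0.0.2", 300]],
    "meta": {"start_time": 1501273618, "end_time": 1501273621},
}

rows = result["data"]  # the report rows themselves
span = result["meta"]["end_time"] - result["meta"]["start_time"]
print(len(rows), span)  # 2 3
```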
get_dataframe(index=0)¶
Return data in pandas DataFrame format.
This will return a single DataFrame for the given index, unlike get_data and get_legend which will optionally return info for all data defs in a report.
Requires the pandas library to be available in the environment.
- Parameters
index (int) – DataDef to process into DataFrame. Defaults to 0.
get_legend(index=0, details=False)¶
Return legend information for the data definition.
- Parameters
index (int) – Set to None to return data from all data definitions; defaults to returning the data from just the first data def.
details (bool) – If True, return the complete column dict; otherwise just short label ids for each column will be returned.
run()¶
Create and run a report instance with stored data definitions.
SteelScript SteelHead¶
The SteelHead package offers a set of interfaces to control and work with a SteelHead appliance.
Documentation available in this module:
Class Reference
SteelScript SteelHead Tutorial¶
This tutorial will walk through the main components of the SteelScript interfaces for Riverbed SteelHead Appliance. It is assumed that you have a basic understanding of the Python programming language.
The tutorial has been organized so you can follow it sequentially.
Throughout the examples, you will be expected to fill in details specific to your environment. These will be called out using a dollar sign $<name> – for example, $host indicates you should fill in the host name or IP address of a SteelHead appliance.
Whenever you see >>>, this indicates an interactive session using the Python shell. The command that you are expected to type follows the >>>, and the result of the command follows it. Any lines with a # are just comments describing what is happening. In many cases the exact output will depend on your environment, so it may not match precisely what you see in this tutorial.
Background¶
Riverbed SteelHead is the industry’s #1 optimization solution for accelerated delivery of all applications across the hybrid enterprise. SteelHead also provides better visibility into application and network performance and the end-user experience, plus control through an application-aware approach to hybrid networking and path selection based on centralized, business-intent-based policies. SteelScript for SteelHead offers a set of interfaces to control and work with a SteelHead appliance.
Operation Overview¶
Interacting with a SteelHead appliance via a Python script involves two steps: first, obtain a SteelHead object; second, send commands to the appliance via that object. Below we describe both steps in detail.
Obtaining a SteelHead Object¶
As with any Python code, the first step is to import the module(s) we intend to use. The SteelScript code for working with SteelHead appliances resides in a module called steelscript.steelhead.core.steelhead. The main class in this module is SteelHead. This object represents a connection to a SteelHead appliance.
To begin, start Python from the shell or command line:
$ python
Python 3.8.10 (default)
Type "help", "copyright", "credits" or "license" for more information.
>>>
Once in the python shell, let’s create a SteelHead object:
>>> from steelscript.steelhead.core import steelhead
>>> from steelscript.common.service import UserAuth
>>> auth = UserAuth(username=$username, password=$password)
>>> sh = steelhead.SteelHead(host=$host, auth=auth)
First, the modules steelscript.steelhead.core.steelhead and steelscript.common.service are imported. Two classes are used: UserAuth and SteelHead. The auth object is created by instantiating the UserAuth class with the username and password needed to access the SteelHead appliance. Afterwards, a SteelHead object is created by instantiating the SteelHead class with the hostname or IP address of the SteelHead appliance and the existing authentication object. Note that the arguments $username and $password need to be replaced with the actual username and password, and the argument $host needs to be replaced with the hostname or IP address of the SteelHead appliance.
As soon as the SteelHead object is created, a connection is established to the appliance and the authentication credentials are validated. If the username and password are not correct, you will immediately see an exception.
Sending commands¶
As soon as a SteelHead object is available, commands can be sent to the SteelHead appliance via two kinds of interfaces: the Command Line Interface (CLI) and the Application Programming Interface (API). The CLI is mainly used when the end user just wants to view the output, as it returns a well-formatted string. In contrast, the API returns Python data objects and can therefore be used for further data analysis and processing.
Below, both interfaces are described in detail using concrete examples.
Note that sh will be used to reference the existing SteelHead object, which is the basis for all communication with the SteelHead appliance.
We can get some basic version information as follows.
>>> print(sh.cli.exec_command("show version"))
Product name: rbt_sh
Product release: 8.5.2
Build ID: #39
Build date: 2013-12-20 10:10:02
Build arch: i386
Built by: mockbuild@bannow-worker4
Uptime: 153d 10h 8m 29s
Product model: 250
System memory: 2063 MB used / 974 MB free / 3038 MB total
Number of CPUs: 1
CPU load averages: 0.23 / 0.15 / 0.10
As shown above, a CLI object is obtained by referencing the cli attribute of sh. Afterwards, the exec_command method can be called on that CLI object. Note that the string argument is the actual CLI command, run as if it were executed on the SteelHead appliance.
When one logs into a SteelHead appliance, the shell terminal is in one of three modes: basic mode, enable mode, or configure mode. The CLI interface from the SteelHead object defaults to enable mode. In order to enter configure mode, the user needs to either pass a mode parameter or change the default mode to configure mode. The first method applies when one needs to run no more than a few commands in configure mode, as shown below:
>>> from steelscript.cmdline.cli import CLIMode
>>> sh.cli.exec_command("show version", mode=CLIMode.CONFIG)
In contrast, if the user wants to engage in a fair amount of interaction with the SteelHead appliance in configure mode, it is recommended to change the default mode, as shown below:
>>> from steelscript.cmdline.cli import CLIMode
>>> sh.cli.default_mode = CLIMode.CONFIG
>>> sh.cli.exec_command("show version")
If the user wants to obtain Python data objects via the SteelHead object sh, instead of just viewing output, the API interface should be used. The key components of the API interface are the Model and Action classes. The Model class is used when the desired data is a property of a SteelHead appliance that can usually be derived by executing just one command. The Action class, on the other hand, contains higher-level methods that derive data through extra processing beyond a single command. For instance, obtaining the version information of a SteelHead appliance uses the Model class as follows:
>>> from pprint import pprint
>>> from steelscript.common.interaction.model import Model
>>> model = Model.get(sh, feature='common')
>>> pprint(model.show_version())
{u'build arch': u'i386',
u'build id': u'#39',
u'built by': u'mockbuild@bannow-worker4',
u'number of cpus': 1,
u'product model': u'250',
u'product name': u'rbt_sh',
u'product release': u'8.5.2'}
In contrast, getting the product information of the SteelHead requires further processing of the version output above, so the Action class should be used as follows:
>>> from pprint import pprint
>>> from steelscript.common.interaction.action import Action
>>> action = Action.get(sh, feature='common')
>>> pprint(action.get_product_info())
{u'model': u'250', u'name': u'SteelHead', u'release': u'8.5.2'}
From the above two examples, we can summarize the procedure for using the API to obtain data from a SteelHead. First, the Model or Action class is imported. Second, a Model or Action object is created by passing the SteelHead object sh and a feature string ('common' here) to the get class method of the chosen class. The last and most important step is to call the method on the derived Model or Action object that returns the specific data desired.
There are a total of five features available: 'common', 'networking', 'optimization', 'flows' and 'stats'. Each feature is bound to a model and an action object with a set of associated methods. Methods supported by each feature can be found at SteelScript SteelHead Sources.
Note that both of the above examples yield data as a Python dictionary instead of a well-formatted string.
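Because the Model and Action calls return dictionaries rather than formatted strings, their values can feed directly into further logic. For example, using the sample show_version() values from above:

```python
# Sample of the dict returned by model.show_version() (values copied from
# the tutorial output above).
version_info = {
    'product name': 'rbt_sh',
    'product release': '8.5.2',
    'product model': '250',
    'number of cpus': 1,
}

# The dotted release string becomes a tuple that compares numerically.
release = tuple(int(part) for part in version_info['product release'].split('.'))
print(release, release >= (8, 5))  # (8, 5, 2) True
```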
Before moving on, exit the python interactive shell:
>>> [Ctrl-D]
$
Extending the Example¶
As a last item to help get started with your own scripts, we will post a new script below, then walk through the key sections in the example script.
#!/usr/bin/env python

import steelscript.steelhead.core.steelhead as steelhead

from steelscript.common.service import UserAuth
from steelscript.common.app import Application


class ShowVersionApp(Application):

    def add_positional_args(self):
        self.add_positional_arg('host', 'SteelHead hostname or IP address')

    def add_options(self, parser):
        super(ShowVersionApp, self).add_options(parser)
        parser.add_option('-u', '--username', help="Username to connect with")
        parser.add_option('-p', '--password', help="Password to use")

    def validate_args(self):
        super(ShowVersionApp, self).validate_args()
        if not self.options.username:
            self.parser.error("User Name needs to be specified")
        if not self.options.password:
            self.parser.error("Password needs to be specified")

    def main(self):
        auth = UserAuth(username=self.options.username,
                        password=self.options.password)
        sh = steelhead.SteelHead(host=self.options.host, auth=auth)
        print(sh.cli.exec_command("show version"))


ShowVersionApp().run()
Let us break down the script. First we need to import some items:
#!/usr/bin/env python

import steelscript.steelhead.core.steelhead as steelhead

from steelscript.common.service import UserAuth
from steelscript.common.app import Application
The bit at the top is called a shebang; it tells the system to execute this script using the program named after the '#!'. Besides the steelhead module, we are also importing the Application class, which is used to help parse arguments and simplify the calls needed to run the application.
class ShowVersionApp(Application):

    def add_positional_args(self):
        self.add_positional_arg('host', 'SteelHead hostname or IP address')

    def add_options(self, parser):
        super(ShowVersionApp, self).add_options(parser)
        parser.add_option('-u', '--username', help="Username to connect with")
        parser.add_option('-p', '--password', help="Password to use")

    def validate_args(self):
        super(ShowVersionApp, self).validate_args()
        if not self.options.username:
            self.parser.error("User Name needs to be specified")
        if not self.options.password:
            self.parser.error("Password needs to be specified")
This section begins the definition of a new class, which inherits from the Application class. This is some of the magic of object-oriented programming: a lot of functionality is defined as part of Application, and we get all of that for free just by inheriting from it. In fact, we go beyond that and extend its functionality by defining methods to collect and check arguments. Here we add the arguments needed to identify and log in to the appliance (a host, a user name and a password) and validate that they were supplied; if the arguments passed on the command line are incomplete, an error message with usage help is printed.
    def main(self):
        auth = UserAuth(username=self.options.username,
                        password=self.options.password)
        sh = steelhead.SteelHead(host=self.options.host, auth=auth)
        print(sh.cli.exec_command("show version"))


ShowVersionApp().run()
This is the main part of the script, and it is using the CLI interface. One can easily modify it to use any API interface to fetch data from a SteelHead appliance. The last line calls the run function as defined in the Application class, which executes the main function defined in the ShowVersionApp class.
Now let us try to run the script. Copy the code into a new file named show_version_example.py, make it executable, and run it from the command line. Note that host, username and password are now all items passed to the command, as shown below.
$ chmod +x show_version_example.py
$ show_version_example.py $host -u $username -p $password
Product name: rbt_sh
Product release: 8.5.2
Build ID: #39
Build date: 2013-12-20 10:10:02
Build arch: i386
Built by: mockbuild@bannow-worker4
Uptime: 153d 10h 8m 29s
Product model: 250
System memory: 2063 MB used / 974 MB free / 3038 MB total
Number of CPUs: 1
CPU load averages: 0.23 / 0.15 / 0.10
SteelScript SteelHead Sources¶
This module contains the SteelHead class - the main interface to a SteelHead appliance.
CLIAuth Objects¶
class steelscript.steelhead.core.steelhead.CLIAuth(username, password=None, private_key_path=None)¶
This class is used for username/password based authentication for command-line access.
SteelHead Objects¶
CommonModel Objects¶
Returned by Model.get(steelhead_instance, feature='common').
class steelscript.steelhead.features.common.v8_5.model.CommonModel(resource, cli=None, **kwargs)¶
Kauai model for the 'common' REST Service on the SteelHead product.
show_version()¶
Returns parsed output of 'show version'.
Product name: rbt_sh
Product release: 9.0.1
Build ID: #19
Build date: 2014-11-19 01:59:36
Build arch: x86_64
Built by: mockbuild@bannow-worker4
Uptime: 15d 23h 22m 33s
Product model: CX1555
System memory: 6378 MB used / 1552 MB free / 7931 MB total
Number of CPUs: 4
CPU load averages: 0.08 / 0.17 / 0.10
- Returns
Dictionary of values returned:
{'product name': 'rbt_sh', 'product release': '9.0.1', 'build id': '#19', 'build arch': 'x86_64', ...
CLI
(CommonAction) Objects¶
Returned by Action.get(steelhead_instance, feature='common')
.
-
class
steelscript.steelhead.features.common.v8_5.action.
CLI
(resource, service=None, feature=None)¶ CLI-based Actions for the ‘common’ REST Service on the SteelHead product.
-
get_product_info
()¶ Gets basic software and hardware product information.
- Returns
Dictionary of values:
{'name': 'SteelHead', 'model': 'CX1555', 'release': '9.0.1'}
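The returned dictionary is essentially a condensed view of the show_version data. A rough sketch of that mapping (both product_info and PRODUCT_NAMES here are illustrative stand-ins, not package code):

```python
# Illustrative mapping from internal product id to display name (assumed).
PRODUCT_NAMES = {'rbt_sh': 'SteelHead'}

def product_info(version):
    """Condense a parsed 'show version' dict into name/model/release."""
    return {
        'name': PRODUCT_NAMES.get(version['product name'],
                                  version['product name']),
        'model': version['product model'],
        'release': version['product release'],
    }

info = product_info({'product name': 'rbt_sh',
                     'product model': 'CX1555',
                     'product release': '9.0.1'})
# -> {'name': 'SteelHead', 'model': 'CX1555', 'release': '9.0.1'}
```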
FlowsModel Objects¶
Returned by Model.get(steelhead_instance, feature='flows').
class steelscript.steelhead.features.flows.v8_5.model.FlowsModel(resource, cli=None, **kwargs)¶
Kauai Flows model for the SteelHead product.
show_flows(type='all')¶
Method to show flows on a SteelHead. Currently, some flow types are not supported and will not be included in the output; these types are IPv6 and pre_existing connections.
- Parameters
type (string) – Optional parameter to select the type of Flows. Valid choices include all, optimized, passthrough, packet-mode, and tcp-term.
- Returns
dictionary
{'flows_list': [
    {'app': 'UDPv4',
     'destination ip': IPv4Address('10.190.5.2'),
     'destination port': 1003,
     'reduction': 99,
     'since': {'day': '10', 'hour': '23', 'min': '58', 'month': '02',
               'secs': '01', 'year': '2014'},
     'source ip': IPv4Address('10.190.0.1'),
     'source port': 406,
     'type': 'N'},
    ...],
 'flows_summary': {
    'denied': {'all': 1},
    'discarded': {'all': 1},
    'establishing': {'all': 1, 'v4': 2, 'v6': 3},
    'forwarded': {'all': 1, 'v4': 2, 'v6': 3},
    'half_closed optimized': {'all': 11, 'v4': 22, 'v6': 33},
    'half_opened optimized': {'all': 1, 'v4': 2, 'v6': 3},
    'optimized': {'all': 1, 'v4': 2, 'v6': 3},
    'packet_mode optimized': {'all': 11, 'v4': 22, 'v6': 33},
    'passthrough': {'all': 11, 'v4': 22, 'v6': 33},
    'passthrough intentional': {'all': 1, 'v4': 2, 'v6': 3},
    'passthrough unintentional': {'all': 11, 'v4': 22, 'v6': 33},
    'passthrough unintentional packet_mode': {'all': 11, 'v4': 22, 'v6': 33},
    'passthrough unintentional terminated': {'all': 1, 'v4': 2, 'v6': 3},
    'rios only': {'all': 1, 'v4': 3, 'v6': 3},
    'rios scps': {'all': 1, 'v4': 2, 'v6': 3},
    'scps only': {'all': 11, 'v4': 22, 'v6': 33},
    'tcp proxy': {'all': 1, 'v4': 2, 'v6': 3},
    'total': {'all': 1, 'v4': 2, 'v6': 3}}}
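Since the return value is a plain nested dictionary, downstream aggregation is ordinary dict work. For example, summing one address-family counter across all summary categories (a sketch using made-up counts, not package code):

```python
# A trimmed flows_summary with made-up counts, in the shape shown above.
summary = {
    'optimized': {'all': 1, 'v4': 2, 'v6': 3},
    'passthrough': {'all': 11, 'v4': 22, 'v6': 33},
}

def total_by_family(summary, family):
    """Sum one address-family counter ('v4' or 'v6') across categories."""
    return sum(counts.get(family, 0) for counts in summary.values())

v4_total = total_by_family(summary, 'v4')  # 24
```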
CLI (FlowsAction) Objects¶
Returned by Action.get(steelhead_instance, feature='flows').
class steelscript.steelhead.features.flows.v8_5.action.CLI(resource, service=None, feature=None)¶
Kauai Flows CLI Delegatee.
show_flows_optimized()¶
Method to show optimized flows on a SteelHead.
- Returns
dictionary
{'flows_list': [
    {'app': 'UDPv4',
     'destination ip': IPv4Address('10.190.5.2'),
     'destination port': 1003,
     'percent': 99,
     'since': {'day': '10', 'hour': '23', 'min': '58', 'month': '02',
               'secs': '01', 'year': '2014'},
     'source ip': IPv4Address('10.190.0.1'),
     'source port': 406,
     'type': 'N'},
    ...],
 'flows_summary': {
    'established optimized': {'all': 1, 'v4': 2, 'v6': 3},
    'packet_mode optimized': {'all': 11, 'v4': 22, 'v6': 33},
    'rios only': {'all': 1, 'v4': 3, 'v6': 3},
    'rios scps': {'all': 1, 'v4': 2, 'v6': 3},
    'scps only': {'all': 11, 'v4': 22, 'v6': 33},
    'tcp proxy': {'all': 1, 'v4': 2, 'v6': 3},
    'total': {'all': 11, 'v4': 40, 'v6': 70}}}
show_flows_passthrough()¶
Method to show passthrough flows on a SteelHead.
- Returns
dictionary
{'flows_list': [
    {'app': 'TCP',
     'destination ip': IPv4Address('10.190.174.120'),
     'destination port': 443,
     'since': {'day': '02', 'hour': '06', 'min': '00', 'month': '01',
               'secs': '50', 'year': '2014'},
     'source ip': IPv4Address('10.3.2.54'),
     'source port': 40097,
     'type': 'PI'},
    ...],
 'flows_summary': {
    'forwarded': {'all': 1, 'v4': 2, 'v6': 3},
    'passthrough': {'all': 11, 'v4': 22, 'v6': 33},
    'passthrough intentional': {'all': 1, 'v4': 2, 'v6': 3},
    'passthrough unintentional': {'all': 11, 'v4': 22, 'v6': 33},
    'passthrough unintentional packet_mode': {'all': 11, 'v4': 22, 'v6': 33},
    'passthrough unintentional terminated': {'all': 1, 'v4': 2, 'v6': 3},
    'total': {'all': 11, 'v4': 40, 'v6': 70}}}
NetworkingModel Objects¶
Returned by Model.get(steelhead_instance, feature='networking').
class steelscript.steelhead.features.networking.v8_5.model.NetworkingModel(resource, cli=None, **kwargs)¶
Kauai Networking model for the SteelHead product.
show_interfaces(interface=None, brief=False)¶
Return parsed output of 'show interfaces <interface> [brief]':
Interface inpath0_0 state
   Up:                       yes
   Interface type:           ethernet
   IP address:               10.11.100.2
   Netmask:                  255.255.255.0
   IPv6 link-local address:  fe80::5054:ff:fe10:3fe9/64
   MTU:                      1500
   HW address:               52:54:00:10:3F:E9
   Traffic status:           Normal
   HW blockable:             no
   Counters cleared date:    2014/01/31 14:28:28
- Parameters
interface (string) – Optional. Return just this interface.
brief (boolean) – Whether to run just brief output.
- Returns
List of dictionaries of values returned:
{'name': 'inpath0_0',
 'ip address': IPv4Interface('10.11.100.2/24'),
 'ipv6 address': IPv6Interface('fe80::5054:ff:fe10:3fe9/64'),
 'hw address': EUI('52-54-00-10-3F-E9'),
 'up': True,
 'rx bytes': 42,
 ...}
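The address fields are standard Python ipaddress objects (the 'hw address' repr suggests netaddr's EUI type), so the usual network arithmetic applies directly to the parsed values:

```python
from ipaddress import IPv4Interface, IPv6Interface

# Values in the shape returned by show_interfaces
iface = IPv4Interface('10.11.100.2/24')
link_local = IPv6Interface('fe80::5054:ff:fe10:3fe9/64')

network = iface.network          # IPv4Network('10.11.100.0/24')
mask = iface.netmask             # IPv4Address('255.255.255.0')
is_ll = link_local.is_link_local # True for fe80::/10 addresses
```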
show_interfaces_configured(interface=None)¶
Return parsed output of 'show interfaces <interface> configured':
Interface inpath0_0 state
   Enabled:            yes
   DHCP:               yes
   Dynamic DNS DHCP:   yes
   DHCPv6:             no
   Dynamic DNS DHCPv6: no
   IP address:         10.11.100.2
   Netmask:            255.255.255.0
   IPv6 address:
   Speed:              auto
   Duplex:             auto
   MTU:                1500
- Parameters
interface (string) – Optional. Return just this interface.
- Returns
List of dictionaries of values returned
{'name': 'inpath0_0',
 'enabled': True,
 'dhcp': True,
 'ip address': IPv4Interface('10.11.100.2/24'),
 'ipv6 address': IPv6Interface('fe80::5054:ff:fe10:3fe9/64'),
 'mtu': 1500,
 ...}
StatsModel Objects¶
Returned by Model.get(steelhead_instance, feature='stats').
class steelscript.steelhead.features.stats.v8_5.model.StatsModel(resource, cli=None, **kwargs)¶
Kauai Stats model for the SteelHead product.
show_stats_bandwidth(port='all', type=None, frequency=None)¶
Method to show bandwidth stats on a SteelHead.
- Parameters
port (string) – Optional parameter to filter the bandwidth summary to traffic on a specific port. The value is simply the port number (e.g., "80") and defaults to "all".
type (string) – The type of traffic to summarize. Options include bi-directional, lan-to-wan, and wan-to-lan.
frequency – The lookback period for stats collection. Options include 1min, 5min, hour, day, week, or month.
- Returns
dictionary
{ 'wan data': '5.4 GB', 'lan data': '6 GB', 'data reduction': 10, 'data reduction peak': 95, 'data reduction peak time': '2014/12/05 14:50:00', 'capacity increase': '1.1' }
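The 'data reduction' figure is the usual WAN-versus-LAN ratio. As a quick sanity check it can be recomputed from the byte counts; this is an illustrative calculation, not package code (the appliance computes the figure itself):

```python
def data_reduction_pct(lan_bytes, wan_bytes):
    """Percent of LAN-side data removed before it hits the WAN."""
    if lan_bytes == 0:
        return 0
    return round((1 - wan_bytes / lan_bytes) * 100)

# 6 GB in on the LAN side, 5.4 GB out on the WAN side -> 10% reduction,
# matching 'lan data': '6 GB', 'wan data': '5.4 GB', 'data reduction': 10
pct = data_reduction_pct(6.0, 5.4)  # 10
```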
SteelScript SCC¶
Python modules for interacting with SCC appliances.
Documentation available in this module:
Class Reference
SteelScript SCC Tutorial¶
This tutorial presents a step-by-step description of how to use the SteelScript SCC package to develop scripts that retrieve data from a target SCC appliance.
Background¶
SCC provides centralized reporting and analysis of the states of other connected Riverbed appliances (e.g., SteelHead). SteelScript for SCC makes this wealth of data easily accessible via Python.
SCC Objects¶
Interacting with an SCC leverages two key classes:
- SCC - provides the primary interface to the appliance, handling initialization, setup and communication via REST API calls.
- BaseStatsReport - leverages the SCC object to pull data and create new reports.
In most cases you will not use BaseStatsReport directly – your scripts will use a more helpful object tailored to the desired report, such as a BWTimeSeriesStatsReport or a ThroughputStatsReport.
We will cover those shortly.
Startup¶
As with any Python code, the first step is to import the modules involved.
The SteelScript code for working with SCC appliances resides in a module
steelscript.scc.core
. The main class in this module is
SCC
. This object represents a connection to an
SCC appliance. Let’s see how easy it is to create an SCC object.
>>> from steelscript.scc.core import SCC
>>> from steelscript.common.service import OAuth
>>> scc = SCC(host='$hostname', auth=OAuth('$access_code'))
Replace the first argument $hostname with the hostname or IP address of the SCC appliance. The second argument is an access code, which is required for OAuth 2.0 authentication. The access code is usually obtained from the web UI of the SCC appliance (see the "Enabling REST API Access" section in your SCC documentation for more information).
Generating Reports¶
After an SCC object has been instantiated, it is time to use it to retrieve some data from the SCC appliance. The good news is that SteelScript-SCC comes with comprehensive coverage of all resources underneath the cmc.stats service. One just needs to browse the classes defined in the steelscript.scc.core module and pick the report class matching the current need. For example, to get the optimized bandwidth over time for all devices associated with the SCC appliance, BWTimeSeriesStatsReport is the one to use.
>>> from steelscript.scc.core import BWTimeSeriesStatsReport
>>> import pprint
>>> report = BWTimeSeriesStatsReport(scc)
>>> report.run(timefilter="last 1 hour", traffic_type='optimized')
Note that timefilter specifies the time range of the query and traffic_type determines the type of traffic to query.
Now that the report has been run, we can fetch the data by accessing the data attribute:
>>> pprint.pprint(report.data)
[{u'data': [7308580.0, 16571400.0, 13216600.0, 68872900.0],
u'timestamp': 1440780000},
{u'data': [6002410.0, 23606000.0, 10935900.0, 52749800.0],
u'timestamp': 1440780300},
{u'data': [4056250.0, 16865900.0, 6394300.0, 37789200.0],
u'timestamp': 1440780600},
{u'data': [5850490.0, 44258800.0, 11690500.0, 104962000.0],
u'timestamp': 1440780900},
{u'data': [7468290.0, 24188900.0, 12829400.0, 84234000.0],
u'timestamp': 1440781200},
{u'data': [13041800.0, 34822600.0, 17672900.0, 77343300.0],
u'timestamp': 1440781500},
{u'data': [182396000.0, 206378000.0, 195764000.0, 261148000.0],
u'timestamp': 1440781800},
{u'data': [178387000.0, 194976000.0, 199298000.0, 235883000.0],
u'timestamp': 1440782100},
{u'data': [177016000.0, 203324000.0, 190545000.0, 261889000.0],
u'timestamp': 1440782400},
{u'data': [187747000.0, 416022000.0, 197363000.0, 450196000.0],
u'timestamp': 1440782700},
{u'data': [151403000.0, 334982000.0, 216453000.0, 422683000.0],
u'timestamp': 1440783000},
{u'data': [159875000.0, 409043000.0, 190787000.0, 451655000.0],
u'timestamp': 1440783300}]
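Each timestamp is a Unix epoch value in seconds, spaced at the report's 300-second (five-minute) interval; converting them for display is a one-liner with the standard library:

```python
from datetime import datetime, timezone

# First two timestamps from the report output above
timestamps = [1440780000, 1440780300]

readable = [
    datetime.fromtimestamp(ts, tz=timezone.utc).strftime('%Y-%m-%d %H:%M:%S')
    for ts in timestamps
]
# readable[0] == '2015-08-28 16:40:00' (UTC)
```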
Extending the Example¶
As a last item to help you get started with your own scripts, we will extend our example with command-line options.
Below is an example script with the ability to accept command-line options and print the returned data.
#!/usr/bin/env python
import pprint
from steelscript.scc.core.app import SCCApp
from steelscript.scc.core import BWTimeSeriesStatsReport
class BWTimeSeriesStatsReportApp(SCCApp):
traffic_types = ['optimized', 'passthrough']
def add_options(self, parser):
super(BWTimeSeriesStatsReportApp, self).add_options(parser)
parser.add_option(
'--timefilter', dest='timefilter', default='last 1 hour',
help='Time range to analyze (defaults to "last 1 hour") '
'other valid formats are: "4/21/13 4:00 to 4/21/13 5:00" '
'or "16:00:00 to 21:00:04.546"')
parser.add_option(
'--traffic_type', dest='traffic_type', default='optimized',
help='Type of traffic to query, either optimized or passthrough')
parser.add_option(
'--devices', dest='devices', default=None,
help='An array of devices being queried on. None implies all '
'devices. If multiple devices are queried on, the data points '
'are the sum across all the devices.')
parser.add_option('--port', dest='port', default=None)
def main(self):
report = BWTimeSeriesStatsReport(self.scc)
report.run(traffic_type=self.options.traffic_type,
timefilter=self.options.timefilter,
devices=self.options.devices,
port=self.options.port)
pprint.pprint(report.data)
if __name__ == '__main__':
BWTimeSeriesStatsReportApp().run()
Copy the above code into a new file, and now you can run the file to display the data.
> python myreport.py $hostname $access_code --devices $serial_numbers --traffic_type 'optimized' --timefilter 'last 1 hour'
[{u'data': [7308580.0, 16571400.0, 13216600.0, 68872900.0],
u'timestamp': 1440780000},
{u'data': [6002410.0, 23606000.0, 10935900.0, 52749800.0],
u'timestamp': 1440780300},
{u'data': [4056250.0, 16865900.0, 6394300.0, 37789200.0],
u'timestamp': 1440780600},
{u'data': [5850490.0, 44258800.0, 11690500.0, 104962000.0],
u'timestamp': 1440780900},
{u'data': [7468290.0, 24188900.0, 12829400.0, 84234000.0],
u'timestamp': 1440781200},
{u'data': [13041800.0, 34822600.0, 17672900.0, 77343300.0],
u'timestamp': 1440781500},
{u'data': [182396000.0, 206378000.0, 195764000.0, 261148000.0],
u'timestamp': 1440781800},
{u'data': [178387000.0, 194976000.0, 199298000.0, 235883000.0],
u'timestamp': 1440782100},
{u'data': [177016000.0, 203324000.0, 190545000.0, 261889000.0],
u'timestamp': 1440782400},
{u'data': [187747000.0, 416022000.0, 197363000.0, 450196000.0],
u'timestamp': 1440782700},
{u'data': [151403000.0, 334982000.0, 216453000.0, 422683000.0],
u'timestamp': 1440783000},
{u'data': [159875000.0, 409043000.0, 190787000.0, 451655000.0],
u'timestamp': 1440783300}]
Now let us walk through the above script in detail.
First we need to import some modules.
#!/usr/bin/env python
import pprint
from steelscript.scc.core.app import SCCApp
from steelscript.scc.core import BWTimeSeriesStatsReport
The first line is called a shebang; it tells the system that the script should be executed using the program named after '#!'. SCCApp is imported for ease of writing scripts that generate SCC reports. BWTimeSeriesStatsReport is imported to facilitate reporting data retrieved from the bw_timeseries resource, which belongs to the cmc.stats service on an SCC device.
class BWTimeSeriesStatsReportApp(SCCApp):
def add_options(self, parser):
super(BWTimeSeriesStatsReportApp, self).add_options(parser)
parser.add_option(
'--timefilter', dest='timefilter', default='last 1 hour',
help='Time range to analyze (defaults to "last 1 hour") '
'other valid formats are: "4/21/13 4:00 to 4/21/13 5:00" '
'or "16:00:00 to 21:00:04.546"')
parser.add_option(
'--traffic_type', dest='traffic_type', default='optimized',
help='Type of traffic to query, either optimized or passthrough')
parser.add_option(
'--devices', dest='devices', default=None,
help='An array of devices being queried on. None implies all '
'devices. If multiple devices are queried on, the data points '
'are the sum across all the devices.')
parser.add_option('--port', dest='port', default=None)
This section begins with the definition of the BWTimeSeriesStatsReportApp class, which inherits from the SCCApp class. The inheritance saves the work of adding the hostname and access-code options, both of which are required for fetching data from an SCC device. The add_options method introduces options for the report, including time filter, traffic type, devices and port. The help text for each option can be seen using the '--help' option.
def main(self):
report = BWTimeSeriesStatsReport(self.scc)
report.run(traffic_type=self.options.traffic_type,
timefilter=self.options.timefilter,
devices=self.options.devices,
port=self.options.port)
pprint.pprint(report.data)
if __name__ == '__main__':
BWTimeSeriesStatsReportApp().run()
This is the main part of the script. The run method of the BWTimeSeriesStatsReportApp class will execute its main method. In the main method, self.scc represents the SCC object, which has been created by the SCCApp class. report.run uses all of the input options to retrieve data via the SCC object.
SteelScript SCC Sources¶
The SCC package offers a set of interfaces to control and work with a SteelCentral Controller appliance.
SCC Objects¶
class steelscript.scc.core.scc.SCC(host, port=None, auth=None)¶
This class is the main interface to interact with a SteelCentral Controller.
__init__(host, port=None, auth=None)¶
Create an SCC object.
ServiceDefLoader Objects¶
class steelscript.scc.core.scc.ServiceDefLoader¶
This class serves as the custom hook for the service manager.
find_by_id(id_)¶
This method generates the service schema corresponding to the id.
- Parameters
id_ – Service id specified in the service definition file
find_by_name(name, version, provider)¶
Method used to discover a service from the service map based upon service name, version and service provider.
- Parameters
name – Name of the service
version – Version of the service
provider – Name of the service provider
SCCServerConnectionHook Objects¶
SCCServiceManager Objects¶
BaseSCCReport Objects¶
class steelscript.scc.core.report.BaseSCCReport(scc)¶
Base class for SCC reports, not directly used for creating report objects.
- Parameters
service – string, attr name of the service obj
resource – string, name of the resource
link – string, name of the link to retrieve data
data_key – string, key mapping to the data in the response; if None, the entire response is desired
required_fields – list of fields required by the sub-report, excluding start_time and end_time
non_required_fields – list of fields available to use but not required by the sub-report
__init__(scc)¶
Initialize self. See help(type(self)) for accurate signature.
run(**kwargs)¶
Run report to fetch data from the SCC device.
BaseStatsReport Objects¶
class steelscript.scc.core.report.BaseStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseSCCReport
Base class for reports generated by the scc.stats api, not directly used for creating report objects. All report instances are derived from sub-classes inheriting from this class.
__init__(scc)¶
Initialize self. See help(type(self)) for accurate signature.
run(**kwargs)¶
Run report to fetch data from the SCC device.
BWUsageStatsReport Objects¶
class steelscript.scc.core.report.BWUsageStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return bandwidth usage.
run(**kwargs)¶
Run report to fetch data from the SCC device.
BWTimeSeriesStatsReport Objects¶
class steelscript.scc.core.report.BWTimeSeriesStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return bandwidth timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
BWPerApplStatsReport Objects¶
class steelscript.scc.core.report.BWPerApplStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the bandwidth per appliance data.
run(**kwargs)¶
Run report to fetch data from the SCC device.
ThroughputStatsReport Objects¶
class steelscript.scc.core.report.ThroughputStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the peak/p95 throughput timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
ThroughputPerApplStatsReport Objects¶
class steelscript.scc.core.report.ThroughputPerApplStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the throughput per appliance data.
run(**kwargs)¶
Run report to fetch data from the SCC device.
ConnectionHistoryStatsReport Objects¶
class steelscript.scc.core.report.ConnectionHistoryStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the max/avg connection history timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SRDFStatsReport Objects¶
class steelscript.scc.core.report.SRDFStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the regular/peak SRDF timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
TCPMemoryPressureReport Objects¶
class steelscript.scc.core.report.TCPMemoryPressureReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the regular/peak TCP memory pressure timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
MultiDevStatsReport Objects¶
class steelscript.scc.core.report.MultiDevStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
run(**kwargs)¶
Run report to fetch data from the SCC device.
ConnectionPoolingStatsReport Objects¶
class steelscript.scc.core.report.ConnectionPoolingStatsReport(scc)¶
Bases: steelscript.scc.core.report.MultiDevStatsReport
Report class to return the connection pooling timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
ConnectionForwardingStatsReport Objects¶
class steelscript.scc.core.report.ConnectionForwardingStatsReport(scc)¶
Bases: steelscript.scc.core.report.MultiDevStatsReport
Report class to return the connection forwarding timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
DNSUsageStatsReport Objects¶
class steelscript.scc.core.report.DNSUsageStatsReport(scc)¶
Bases: steelscript.scc.core.report.MultiDevStatsReport
Report class to return the DNS usage timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
DNSCacheHitsStatsReport Objects¶
class steelscript.scc.core.report.DNSCacheHitsStatsReport(scc)¶
Bases: steelscript.scc.core.report.MultiDevStatsReport
Report class to return the DNS cache hits timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
HTTPStatsReport Objects¶
class steelscript.scc.core.report.HTTPStatsReport(scc)¶
Bases: steelscript.scc.core.report.MultiDevStatsReport
Report class to return the HTTP timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
NFSStatsReport Objects¶
class steelscript.scc.core.report.NFSStatsReport(scc)¶
Bases: steelscript.scc.core.report.MultiDevStatsReport
Report class to return the NFS timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SSLStatsReport Objects¶
class steelscript.scc.core.report.SSLStatsReport(scc)¶
Bases: steelscript.scc.core.report.MultiDevStatsReport
Report class to return the SSL timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
DiskLoadStatsReport Objects¶
class steelscript.scc.core.report.DiskLoadStatsReport(scc)¶
Bases: steelscript.scc.core.report.MultiDevStatsReport
Report class to return the disk load timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SingleDevStatsReport Objects¶
class steelscript.scc.core.report.SingleDevStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
run(**kwargs)¶
Run report to fetch data from the SCC device.
SDRAdaptiveStatsReport Objects¶
class steelscript.scc.core.report.SDRAdaptiveStatsReport(scc)¶
Bases: steelscript.scc.core.report.SingleDevStatsReport
Report class to return the SDR Adaptive timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
MemoryPagingStatsReport Objects¶
class steelscript.scc.core.report.MemoryPagingStatsReport(scc)¶
Bases: steelscript.scc.core.report.SingleDevStatsReport
Report class to return the memory paging timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
CpuUtilizationStatsReport Objects¶
class steelscript.scc.core.report.CpuUtilizationStatsReport(scc)¶
Bases: steelscript.scc.core.report.SingleDevStatsReport
Report class to return the CPU utilization timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
PFSStatsReport Objects¶
class steelscript.scc.core.report.PFSStatsReport(scc)¶
Bases: steelscript.scc.core.report.SingleDevStatsReport
Report class to return the PFS timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
QoSStatsReport Objects¶
class steelscript.scc.core.report.QoSStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the outbound/inbound QoS timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SnapMirrorStatsReport Objects¶
class steelscript.scc.core.report.SnapMirrorStatsReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the regular/peak SnapMirror timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SteelFusionLUNIOReport Objects¶
class steelscript.scc.core.report.SteelFusionLUNIOReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the SteelFusion LUN I/O timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SteelFusionInitiatorIOReport Objects¶
class steelscript.scc.core.report.SteelFusionInitiatorIOReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the SteelFusion initiator I/O timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SteelFusionNetworkIOReport Objects¶
class steelscript.scc.core.report.SteelFusionNetworkIOReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the SteelFusion network I/O timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SteelFusionBlockstoreReport Objects¶
class steelscript.scc.core.report.SteelFusionBlockstoreReport(scc)¶
Bases: steelscript.scc.core.report.BaseStatsReport
Report class to return the SteelFusion blockstore timeseries.
run(**kwargs)¶
Run report to fetch data from the SCC device.
BaseApplInvtReport Objects¶
class steelscript.scc.core.report.BaseApplInvtReport(scc)¶
Bases: steelscript.scc.core.report.BaseSCCReport
Base class for reports generated by the appliance_inventory api, not directly used for creating report objects. All report instances are derived from sub-classes inheriting from this class.
run(**kwargs)¶
Run report to fetch data from the SCC device.
AppliancesReport Objects¶
class steelscript.scc.core.report.AppliancesReport(scc)¶
Bases: steelscript.scc.core.report.BaseApplInvtReport
Report class to return brief info of appliances.
run(**kwargs)¶
Run report to fetch data from the SCC device.
SCCApp Objects¶
class steelscript.scc.core.app.SCCApp(*args, **kwargs)¶
Class to wrap common command line parsing.
__init__(*args, **kwargs)¶
Initialize self. See help(type(self)) for accurate signature.
setup()¶
Commands to run before execution.
If defined in a subclass, the subclass will mostly want to call setup() of the parent via:
super(<subclass>, self).setup()
This will ensure that any setup required of the parent classes is performed as well.
validate_args()¶
Hook for subclasses to add their own option/argument validation.
SteelScript Command Line¶
Python modules for interacting with different transport types, such as telnet and ssh. The repo also contains modules for common parsing of command line responses.
Documentation available in this module:
Class Reference
SteelScript Command Line Sources¶
CLI Objects¶
class steelscript.cmdline.cli.__init__.CLI(hostname=None, username='admin', password=None, private_key_path=None, terminal='console', prompt=None, port=None, machine_name=None, machine_manager_uri='qemu:///system', channel_class=<class 'steelscript.cmdline.sshchannel.SSHChannel'>, **channel_args)¶
Base class CLI implementation for network devices.
Vendor-specific CLIs can inherit from this base class. This class by itself will try to work on a generic CLI if a vendor-specific class is not present.
For the "should match any prompt" regular expressions, the focus was mostly on common OSes in Riverbed's internal environment. Other systems may require subclassing this class and overriding the prompt regexes.
- Parameters
host – host/ip
user – username to log in with
password (string) – password to log in with
private_key_path (string) – absolute path to RSA private key, used instead of password
terminal (string) – terminal emulation to use; defaults to 'console'
prompt (regex pattern) – A prompt to match. Defaults to CLI_ANY_PROMPT
transport_type (string) – DEPRECATED (use channel_class): telnet or ssh, defaults to ssh
user – DEPRECATED (use username)
host – DEPRECATED (use hostname)
channel_class (class) – Class object to instantiate for persistent communication. Defaults to steelscript.cmdline.sshchannel.SSHChannel
channel_args – additional transport_type-dependent arguments, passed blindly to the transport start method.
CLI_ANY_PROMPT = '(^|\\n|\\r)(\\[?\\S+\\s?\\S+\\]?)(#|\\$|>|~)(\\s)?$'¶
A regex that is suitable for most CLIs, root or regular user.
Note that this does not specifically check hostnames, which might lead to false positive matches.
CLI_ROOT_PROMPT = '(^|\\n|\\r)(\\[?\\S+\\s?\\S+\\]?)(#)(\\s)?$'¶
A regex intended for use with POSIX prompts for root ending in '#'.
CLI_START_PROMPT = '(^|\\n|\\r)(\\[?\\S+\\s?\\S+\\]?)(#|\\$|>|~)(\\s)?$'¶
A regex suitable for most initial CLI prompts, root or non-root.
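These patterns can be exercised directly with the re module; note that CLI_ANY_PROMPT accepts user ('>', '$', '~') and root ('#') style prompts, while CLI_ROOT_PROMPT only accepts a trailing '#':

```python
import re

# The prompt regexes above, written as raw strings
CLI_ANY_PROMPT = r'(^|\n|\r)(\[?\S+\s?\S+\]?)(#|\$|>|~)(\s)?$'
CLI_ROOT_PROMPT = r'(^|\n|\r)(\[?\S+\s?\S+\]?)(#)(\s)?$'

assert re.search(CLI_ANY_PROMPT, 'amnesiac> ')       # regular-user prompt
assert re.search(CLI_ANY_PROMPT, '[root@host ~]# ')  # root-style prompt
assert re.search(CLI_ROOT_PROMPT, '[root@host ~]# ')
assert not re.search(CLI_ROOT_PROMPT, 'amnesiac> ')  # no trailing '#'
```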
exec_command(command, timeout=60, output_expected=None, prompt=None)¶
Executes the given command.
This method handles detecting simple boolean conditions such as the presence of output or errors.
- Parameters
command – command to execute; a newline is appended automatically
timeout – maximum time, in seconds, to wait for the command to finish. 0 to wait forever.
output_expected (bool or None) – If not None, indicates whether output is expected (True) or no output is expected (False). If the opposite occurs, raise UnexpectedOutput. Default is None.
prompt – Prompt regex for matching unusual prompts. This should almost never be needed. This parameter is for unusual situations like an install config wizard.
- Returns
output of the command, minus the command itself.
- Raises
TypeError – if output_expected type is incorrect
CmdlineTimeout – on timeout
UnexpectedOutput – if output occurs when no output was expected, or no output occurs when output was expected
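The output_expected behaviour described above reduces to a small validation step. The sketch below illustrates that logic only; check_output is a hypothetical helper, and UnexpectedOutput is a local stand-in for the package's exception class:

```python
class UnexpectedOutput(Exception):
    """Local stand-in for the package's UnexpectedOutput exception."""

def check_output(output, output_expected):
    """Mimic exec_command's output_expected handling (illustrative only)."""
    if output_expected is not None and not isinstance(output_expected, bool):
        # mirrors the documented TypeError for an incorrect type
        raise TypeError('output_expected must be True, False or None')
    if output_expected is not None and bool(output) != output_expected:
        raise UnexpectedOutput(output)
    return output

check_output('some output\n', output_expected=True)  # passes through
check_output('', output_expected=False)              # empty output is fine
```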
start(start_prompt=None)¶
Initialize underlying channel.
- Parameters
start_prompt (regex pattern) – A non-default prompt to match, if any.
IOS_CLI Objects¶
class steelscript.cmdline.cli.ios_cli.IOS_CLI(hostname=None, username='admin', password=None, private_key_path=None, terminal='console', prompt=None, port=None, machine_name=None, machine_manager_uri='qemu:///system', channel_class=<class 'steelscript.cmdline.sshchannel.SSHChannel'>, **channel_args)¶
Bases: steelscript.cmdline.cli.CLI
Implementation of a CLI for IOS devices.
-
current_cli_mode
()¶ Determine the current mode of the CLI.
Sends a newline and checks which prompt pattern matches.
- Returns
current CLI mode.
- Raises
UnknownCLIMode – if the current mode could not be detected.
-
enter_mode
(mode='configure', interface=None)¶ Enter mode based on mode string (‘normal’, ‘enable’, or ‘configure’).
- Parameters
mode – The CLI mode to enter. It must be ‘normal’, ‘enable’, or ‘configure’
interface – If entering sub-if mode, interface to enter
- Raises
UnknownCLIMode – if mode is not “normal”, “enable”, or “configure”
-
enter_mode_config
()¶ Puts the CLI into config mode, if it is not there already.
- Raises
UnknownCLIMode – if mode is not “normal”, “enable”, or “configure”
-
enter_mode_enable
()¶ Puts the CLI into enable mode.
Note this will go ‘backwards’ if needed (e.g., exiting config mode)
- Raises
UnknownCLIMode – if mode is not “normal”, “enable”, or “configure”
-
enter_mode_normal
()¶ Puts the CLI into the ‘normal’ mode (its initial state).
Note this will go ‘backwards’ if needed (e.g., exiting config mode)
- Raises
UnknownCLIMode – if mode is not “normal”, “enable”, or “configure”
-
enter_mode_subif
(interface)¶ Puts the CLI into sub-interface mode, if it is not there already.
-
exec_command
(command, timeout=60, mode='configure', output_expected=None, error_expected=False, interface=None, prompt=None)¶ Executes the given command.
This method handles detecting simple boolean conditions such as the presence of output or errors.
- Parameters
command – command to execute, newline appended automatically
timeout – maximum time, in seconds, to wait for the command to finish. 0 to wait forever.
mode – mode to enter before running the command. To skip this step and execute directly in the cli’s current mode, explicitly set this parameter to None. The default is “configure”
output_expected (bool or None) – If not None, indicates whether output is expected (True) or no output is expected (False). If the opposite occurs, raise UnexpectedOutput. Default is None.
error_expected (bool) – If true, cli error output (with a leading ‘%’) is expected and will be returned as regular output instead of raising a CLIError. Default is False, and error_expected always overrides output_expected.
interface (string) – if mode is ‘subif’, the interface to configure, e.g. ‘gi0/1.666’ or ‘vlan 691’
prompt – Prompt regex for matching unusual prompts. This should almost never be used, as the mode parameter automatically handles all typical cases. This parameter is for unusual situations like the install config wizard.
- Raises
CmdlineTimeout – on timeout
CLIError – if the output matches the cli’s error format, and error output was not expected.
UnexpectedOutput – if output occurs when no output was expected, or no output occurs when output was expected
- Returns
output of the command, minus the command itself.
-
start
()¶ Initialize underlying channel.
-
RVBD_CLI
Objects¶
-
class
steelscript.cmdline.cli.rvbd_cli.
RVBD_CLI
(hostname=None, username='admin', password=None, private_key_path=None, terminal='console', prompt=None, port=None, machine_name=None, machine_manager_uri='qemu:///system', channel_class=<class 'steelscript.cmdline.sshchannel.SSHChannel'>, **channel_args)¶ Bases:
steelscript.cmdline.cli.CLI
Implementation of a CLI for Riverbed appliances.
-
current_cli_mode
()¶ Determine the current mode of the CLI.
Sends a newline and checks which prompt pattern matches.
- Returns
current CLI mode.
- Raises
UnknownCLIMode – if the current mode could not be detected.
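For illustration, prompt-based mode detection can be sketched as follows. The PROMPTS table and detect_mode helper are hypothetical simplifications, not the regexes RVBD_CLI actually uses internally:

```python
import re

# Hypothetical prompt patterns for each CLI mode, simplified for
# illustration; RVBD_CLI uses its own regexes internally.
PROMPTS = {
    'normal': re.compile(r'(^|\n)[-\w.]+ > $'),
    'enable': re.compile(r'(^|\n)[-\w.]+ # $'),
    'configure': re.compile(r'(^|\n)[-\w.]+ \(config\) # $'),
}

def detect_mode(prompt_text):
    """Return the mode whose prompt pattern matches the received text."""
    for mode, pattern in PROMPTS.items():
        if pattern.search(prompt_text):
            return mode
    raise ValueError('unknown CLI mode for prompt: %r' % prompt_text)
```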
-
property
default_mode
¶ The default mode that exec_command issues commands in.
-
enter_mode
(mode='enable', reinit=True)¶ Enter mode based on name (‘normal’, ‘enable’, ‘configure’, or ‘shell’).
- Parameters
mode – The CLI mode to enter. It must be ‘normal’, ‘enable’, ‘configure’, or ‘shell’. Use CLIMode values.
reinit (bool) – whether this function should attempt to repair the connection.
- Raises
UnknownCLIMode – if mode is not “normal”, “enable”, “configure”, or “shell”.
CLINotRunning – if the CLI is not running.
-
enter_mode_config
()¶ Puts the CLI into config mode, if it is not there already.
- Raises
CLINotRunning – if the shell is not in the CLI; current thinking is this indicates the CLI has crashed/exited, and it is better to open a new CLI than have this one log back in and potentially hide an error.
-
enter_mode_enable
()¶ Puts the CLI into enable mode.
Note this will go ‘backwards’ if needed (e.g., exiting config mode)
- Raises
CLINotRunning – if the shell is not in the CLI; current thinking is this indicates the CLI has crashed/exited, and it is better to open a new CliChannel than have this one log back in and potentially hide an error.
-
enter_mode_normal
()¶ Puts the CLI into the ‘normal’ mode (its initial state).
Note this will go ‘backwards’ if needed (e.g., exiting config mode)
- Raises
CLINotRunning – if the shell is not in the CLI; current thinking is this indicates the CLI has crashed/exited, and it is better to open a new CliChannel than have this one log back in and potentially hide an error.
-
enter_mode_shell
()¶ Exits the CLI into shell mode.
This is a one-way transition, and you will need to start a new CLI object to get back.
-
exec_command
(command, timeout=60, mode='', output_expected=None, error_expected=False, prompt=None)¶ Executes the given command.
This method handles detecting simple boolean conditions such as the presence of output or errors.
- Parameters
command – command to execute, newline appended automatically
timeout – maximum time, in seconds, to wait for the command to finish. 0 to wait forever.
mode – mode to enter before running the command. The default is default_mode(). To skip this step and execute directly in the cli’s current mode, explicitly set this parameter to None.
output_expected (bool or None) – If not None, indicates whether output is expected (True) or no output is expected (False). If the opposite occurs, raise UnexpectedOutput. Default is None.
error_expected (bool) – If true, cli error output (with a leading ‘%’) is expected and will be returned as regular output instead of raising a CLIError. Default is False, and error_expected always overrides output_expected.
prompt – Prompt regex for matching unusual prompts. This should almost never be used, as the mode parameter automatically handles all typical cases. This parameter is for unusual situations like the install config wizard.
- Returns
output of the command, minus the command itself.
- Raises
CmdlineTimeout – on timeout
CLIError – if the output matches the cli’s error format, and error output was not expected.
UnexpectedOutput – if output occurs when no output was expected, or no output occurs when output was expected
-
get_sub_commands
(root_cmd)¶ Gets a list of commands at the current mode.
It sends root_cmd followed by ? and returns everything that is a command, stripping out things in <>’s and other free-form fields the user has to enter.
- Parameters
root_cmd – root of the command to get subcommands for
- Returns
a list of the full paths to subcommands. For example, if root_cmd is “web”, this returns:
['web autologout', 'web auto-refresh', ...]
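The filtering described above can be sketched as follows; parse_sub_commands and the sample help text are hypothetical illustrations, not the library's implementation:

```python
def parse_sub_commands(root_cmd, help_output):
    """Sketch: keep real keywords from '?' help output, skip <> fields."""
    commands = []
    for line in help_output.splitlines():
        tokens = line.split()
        # Free-form fields such as <cr> or <hostname> are not commands.
        if tokens and not tokens[0].startswith('<'):
            commands.append('%s %s' % (root_cmd, tokens[0]))
    return commands
```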
-
start
(start_prompt=None, run_cli=True)¶ Initialize the underlying channel, disable paging
- Parameters
start_prompt – Allows overriding the standard initial match for any reasonable CLI prompt to expect a specific mode or handle an unusual situation such as the install wizard.
run_cli – If True (the default), automatically launch the cli and disable paging. This can be set to false to handle situations such as installation where the cli is launched differently. The CLI will be running in normal mode.
-
VyattaCLI
Objects¶
-
class
steelscript.cmdline.cli.vyatta_cli.
VyattaCLI
(hostname=None, username='admin', password=None, private_key_path=None, terminal='console', prompt=None, port=None, machine_name=None, machine_manager_uri='qemu:///system', channel_class=<class 'steelscript.cmdline.sshchannel.SSHChannel'>, **channel_args)¶ Bases:
steelscript.cmdline.cli.CLI
Provides an interface to interact with the CLI of a Vyatta router.
-
current_cli_mode
()¶ Determine the current mode of the CLI.
This is done by sending a newline and checking which prompt pattern matches.
- Returns
current CLI mode.
- Raises
UnknownCLIMode – if the current mode could not be detected.
-
enter_mode
(mode='configure', force=False)¶ Enter the mode based on the mode string (‘normal’ or ‘configure’).
- Parameters
mode (string) – The CLI mode to enter. It must be ‘normal’ or ‘configure’
force (Boolean) – Discard commits and force enter requested mode
- Raises
UnknownCLIMode – if mode is not “normal”, “configure”
-
enter_mode_config
()¶ Puts the CLI into config mode, if it is not there already.
In this mode, you can make changes in the configuration.
- Raises
UnknownCLIMode – if mode is not “normal”, “configure”
-
enter_mode_normal
(force=False)¶ Puts the CLI into the ‘normal’ mode.
In this mode you can run commands, but you cannot change the configuration.
- Parameters
force (Boolean) – Will force enter ‘normal’ mode, discarding all changes that haven’t been committed.
- Raises
CLIError – if unable to go from “configure” mode to “normal”. This happens if “commit” is not applied after config changes.
UnknownCLIMode – if mode is not “normal” or “configure”
-
exec_command
(command, timeout=60, mode='configure', force=False, output_expected=None, prompt=None)¶ Executes the given command.
This method handles detecting simple boolean conditions such as the presence of output or errors.
- Parameters
command – command to execute, newline appended automatically
timeout – maximum time, in seconds, to wait for the command to finish. 0 to wait forever.
mode – mode to enter before running the command. To skip this step and execute directly in the cli’s current mode, explicitly set this parameter to None. The default is “configure”
force (Boolean) – Will force enter mode, discarding all changes that haven’t been committed.
output_expected (bool or None) – If not None, indicates whether output is expected (True) or no output is expected (False). If the opposite occurs, raise UnexpectedOutput. Default is None.
prompt – Prompt regex for matching unusual prompts. This should almost never be used, as the mode parameter automatically handles all typical cases. This parameter is for unusual situations like the install config wizard.
- Returns
output of the command, minus the command itself.
- Raises
TypeError – if output_expected type is incorrect
CmdlineTimeout – on timeout
UnexpectedOutput – if output occurs when no output was expected, or no output occurs when output was expected
-
start
()¶ Initialize underlying channel.
The Vyatta transport channel is presently configured for SSH only. There is no inherent limitation; Vyatta could be configured for telnet as well, but that would require additional configuration during Vyatta bring-up at install time, so it is ignored for now.
-
Channel
Objects¶
-
class
steelscript.cmdline.channel.
Channel
¶ Abstract class to define a common interface for a two-way communication channel.
-
abstract
expect
(match_res, timeout=60)¶ Waits for some text to be received that matches one or more regex patterns.
- Parameters
match_res – A list of regex pattern(s) to look for to be considered successful.
timeout – maximum time, in seconds, to wait for a regular expression match. 0 to wait forever.
- Returns
(output, re.MatchObject) where output is the output of the command (without the matched text), and MatchObject is a Python re.MatchObject containing data on what was matched.
You may use MatchObject.string[m.start():m.end()] to recover the actual matched text.
MatchObject.re.pattern will contain the pattern that matched, which will be one of the elements of match_res passed in.
-
fixup_carriage_returns
(data)¶ To work around all the different \r\n combos we are getting from the CLI, we normalize the data as follows:
Eat consecutive \r’s (a\r\r\nb -> a\r\nb)
Convert \r\n’s to \n (a\r\nb -> a\nb)
Convert \n\r to \n (a\r\n\rb -> a\n\rb -> a\nb)
Convert single \r’s to \n, unless at end of string (a\rb -> a\nb)
Rule #4 does not trigger at the end of the string, so that partially received data is handled correctly; the next character that comes in may be a \n, \r, etc.
- Parameters
data – string to convert
- Returns
the string data with the linefeeds converted into only \n’s
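The four rules can be sketched with stdlib regex operations; this is an illustrative re-implementation, not the library's code:

```python
import re

def fixup_carriage_returns(data):
    """Illustrative re-implementation of the four normalization rules."""
    data = re.sub(r'\r+\n', '\r\n', data)  # 1. eat consecutive \r's before \n
    data = data.replace('\r\n', '\n')      # 2. \r\n -> \n
    data = data.replace('\n\r', '\n')      # 3. \n\r -> \n
    data = re.sub(r'\r(?!$)', '\n', data)  # 4. lone \r -> \n, except at end
    return data
```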
-
abstract
receive_all
()¶ Returns all text currently in the receive buffer, effectively flushing it.
- Returns
the text that was present in the receive queue, if any.
-
safe_line_feeds
(in_string)¶ - Parameters
in_string – string to replace linefeeds
- Returns
a string that has the linefeeds converted to ASCII representation for printing
-
abstract
send
(text_to_send)¶ Sends text to the channel immediately. Does not wait for any response.
- Parameters
text_to_send – Text to send, including command terminator(s) when applicable.
-
abstract
CmdlineException
Objects¶
-
class
steelscript.cmdline.exceptions.
CmdlineException
(command=None, output=None, _subclass_msg=None)¶ Base exception representing an error executing the command line.
- Parameters
command – The command that produced the error.
output – The output returned, possibly None.
- Variables
command – The command that produced the error.
output – The output returned. None if the command did not return.
CmdlineTimeout
Objects¶
-
class
steelscript.cmdline.exceptions.
CmdlineTimeout
(timeout, command=None, output=None, failed_match=None)¶ Bases:
steelscript.cmdline.exceptions.CmdlineException
Indicates a command was abandoned due to a timeout.
Some timeouts within a given protocol may be reported as ConnectionError as the third-party libraries are not always specific about causes. However, all timeouts triggered in SteelScript code will raise this exception.
- Parameters
timeout – The number of seconds that we were waiting for.
command – The command we were trying to execute.
output – Partial output received, if any.
failed_match (Match object, pattern object, or string.) – What we were trying to match, or None.
- Variables
command – The command we were trying to execute.
output – Partial output received, if any.
timeout – The number of seconds that we were waiting for.
failed_match_pattern – The pattern we were trying to match, if any.
-
with_traceback
()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
ConnectionError
Objects¶
-
class
steelscript.cmdline.exceptions.
ConnectionError
(command=None, output=None, cause=None, failed_match=None, context=None, _subclass_msg=None)¶ Bases:
steelscript.cmdline.exceptions.CmdlineException
Indicates a (probably) non-timeout error from the underlying protocol.
May contain a wrapped exception from a third-party library. In Python 3 this would be on the __cause__ attribute. The third-party library may not use a specific exception for timeouts, so certain kinds of timeouts may appear as a ConnectionError. Timeouts managed by SteelScript code should use CmdlineTimeout instead.
This exception should be used to propagate errors up to levels that should not be aware of the specific underlying protocol.
- Parameters
command – The command we were trying to execute.
output – Any output produced just before the failure.
cause – The protocol-specific exception, if any, that triggered this.
failed_match (Match object, pattern object, or string.) – What we were trying to match, or None.
context – An optional string describing the context of the error.
- Variables
command – The command we were trying to execute.
output – Any output produced just before the failure.
cause – The protocol-specific exception, if any, that triggered this.
failed_match_pattern – The pattern we were trying to match, if any.
-
with_traceback
()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
CLINotRunning
Objects¶
-
class
steelscript.cmdline.exceptions.
CLINotRunning
(output=None)¶ Bases:
steelscript.cmdline.exceptions.ConnectionError
Exception for when the CLI has crashed or could not be started.
- Parameters
output – Output of trying to start the CLI, or None if we expected the CLI to be there and it was not.
- Variables
output – Output of trying to start the CLI, or None if we expected the CLI to be there and it was not.
-
with_traceback
()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
CmdlineError
Objects¶
-
class
steelscript.cmdline.exceptions.
CmdlineError
(command=None, output=None, _subclass_msg=None)¶ Bases:
steelscript.cmdline.exceptions.CmdlineException
Base for command responses that specifically indicate an error.
See specific exceptions such as ShellError and CLIError for additional debugging fields.
-
with_traceback
()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
-
ShellError
Objects¶
-
class
steelscript.cmdline.exceptions.
ShellError
(command, exit_status, output=None)¶ Bases:
steelscript.cmdline.exceptions.CmdlineError
Exception representing a nonzero exit status from the shell.
Technically, this represents an unexpected exit status from the shell, as some commands, such as diff, have successful nonzero exit statuses.
- Parameters
command – The command that produced the error.
exit_status – The exit status of the command.
output – The output as returned by the shell, if any.
- Variables
command – The command that produced the error.
exit_status – The exit status of the command.
output – The output as returned by the shell, if any.
-
with_traceback
()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
CLIError
Objects¶
-
class
steelscript.cmdline.exceptions.
CLIError
(command, mode, output=None)¶ Bases:
steelscript.cmdline.exceptions.CmdlineError
Exception representing an error message from the CLI.
- Parameters
command – The command that produced the error.
mode – The CLI mode we were in when the error occurred.
output – The error string as returned by the CLI.
- Variables
command – The command that produced the error.
mode – The CLI mode we were in when the error occurred.
output – The error string as returned by the CLI.
-
with_traceback
()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
UnexpectedOutput
Objects¶
-
class
steelscript.cmdline.exceptions.
UnexpectedOutput
(command, output, expected_output=None, notes=None)¶ Bases:
steelscript.cmdline.exceptions.CmdlineException
Exception for when output does not match expectations.
This could include output where none was expected, no output where some was expected, or differing output than expected.
This generally does not mean easily detectable error output, which is indicated by the appropriate subclass of
CmdlineError
- Parameters
command – The command that produced the error.
output – The output as returned from the command, possibly None.
expected_output (String, possibly a regexp pattern.) – The output expected from the command, possibly None. If unspecified output was expected, set to True.
notes (List of strings) – Some extra information related with the error, possibly None.
- Variables
command – The command that produced the error.
output – The output as returned from the command.
expected_output – The output expected from the command, possibly None. If unspecified output was expected, set to True.
notes – Some extra information related with the error, possibly None.
-
with_traceback
()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
UnknownCLIMode
Objects¶
-
class
steelscript.cmdline.exceptions.
UnknownCLIMode
(prompt=None, mode=None)¶ Bases:
steelscript.cmdline.exceptions.CmdlineException
Exception for any CLI that sees or is asked for an unknown mode.
- Parameters
prompt – The prompt seen that cannot be mapped to a mode
mode – The mode that was requested but not recognized
- Variables
prompt – The prompt seen that cannot be mapped to a mode
mode – The mode that was requested but not recognized
-
with_traceback
()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
LibVirtChannel
Objects¶
-
class
steelscript.cmdline.libvirtchannel.
LibVirtChannel
(machine_name, machine_manager_uri='qemu:///system', username='root', password='', **kwargs)¶ Channel for connecting to a serial port via libvirt.
- Parameters
machine_name – The libvirt domain to which to connect.
machine_manager_uri – The hypervisor uri where the domain may be found. Defaults to a local qemu hypervisor.
username – username for authentication
password – password for authentication
-
expect
(match_res, timeout=300)¶ Matches regular expressions against single lines in the stream.
Internally, this method works with bytes, but input and output are unicode as usual.
- Parameters
match_res – a list of regular expressions to match against the output.
timeout – Time to wait for matching data in the stream, in seconds. Note that the default timeout is longer than on most channels.
- Returns
(output, match_object) where output is the output of the command (without the matched text), and match_object is a Python re.MatchObject containing data on what was matched.
You may use MatchObject.string[m.start():m.end()] to recover the actual matched text, which will be unicode.
re.MatchObject.pattern will contain the pattern that matched, which will be one of the elements of match_res passed in.
- Raises
CmdlineTimeout – if no match found before timeout.
-
receive_all
()¶ Returns all text currently in the receive buffer, effectively flushing it.
- Returns
the text that was present in the receive queue, if any.
-
send
(text_to_send)¶ Sends text to the channel immediately. Does not wait for any response.
- Parameters
text_to_send (str) – Text to send, including command terminator(s) when applicable.
-
start
(match_res=('(^|\n|\r)([-a-zA-Z0-9_.]* )?# ', ), timeout=300)¶ Opens a console and logs in.
- Parameters
match_res – Pattern(s) of prompts to look for. May be a single regex string, or a list of them.
timeout – maximum time, in seconds, to wait for a regular expression match. 0 to wait forever.
- Returns
Python
re.MatchObject
containing data on what was matched.
Shell
Objects¶
-
class
steelscript.cmdline.shell.
Shell
(host, user='root', password='')¶ Class for running shell commands remotely and statelessly.
No persistent channel is maintained, so changes to environment variables or other state will not be present for subsequent commands.
- Parameters
host – host/ip to ssh into
user – username to log in with
password – password to log in with
-
exec_command
(command, timeout=60, output_expected=None, error_expected=False, exit_info=None, retry_count=3, retry_delay=5, expect_output=None, expect_error=None)¶ Executes the given command statelessly.
Since this is stateless, an exec_command cannot use environment variables, directory changes, or other state from a previous exec_command.
This method handles detecting simple boolean conditions such as the presence of output or errors.
- Parameters
command – command to send
timeout – seconds to wait for command to finish. None to disable
output_expected (bool or None) – If not None, indicates whether output is expected (True) or no output is expected (False). If the opposite occurs, raise UnexpectedOutput. Default is None.
error_expected (bool) – If true, a nonzero exit status will not trigger an exception as it normally would. Default is False, and error_expected always overrides output_expected.
exit_info (dict or None) – If set to a dict, the exit status is added to the dictionary under the key ‘status’. Primarily used in conjunction with error_expected when multiple nonzero statuses are possible.
retry_count (int) – the number of tries to reconnect if the underlying connection is disconnected. Default is 3
retry_delay (int) – delay in seconds between each retry to connect. Default is 5
- Returns
output from the command
- Raises
ConnectionError – if the connection is lost
CmdlineTimeout – on timeout
ShellError – on an unexpected nonzero exit status
UnexpectedOutput – if output occurs when no output was expected, or no output occurs when output was expected
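The retry behavior controlled by retry_count and retry_delay can be sketched generically. run_with_retries is a hypothetical helper, and the Python builtin ConnectionError stands in for steelscript.cmdline.exceptions.ConnectionError:

```python
import time

def run_with_retries(run_once, retry_count=3, retry_delay=5):
    """Retry a callable when the connection drops, sleeping between tries."""
    for attempt in range(retry_count + 1):
        try:
            return run_once()
        except ConnectionError:
            if attempt == retry_count:
                raise  # retries exhausted; propagate the error
            time.sleep(retry_delay)
```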
SSHChannel
Objects¶
-
class
steelscript.cmdline.sshchannel.
SSHChannel
(hostname, username, password=None, private_key_path=None, port=22, terminal='console', width=80, height=24, **kwargs)¶ Two-way SSH channel that allows sending and receiving data.
- Parameters
hostname (string) – hostname, fqdn, or ip address of the target system.
port – optional port for the connection. Default is 22.
username – account to use for authentication
password – password for authentication
private_key_path – absolute system path to private key file
terminal – terminal emulation to use; defaults to ‘console’
width – width (in characters) of the terminal screen; defaults to 80
height – height (in characters) of the terminal screen; defaults to 24
Both password and private_key_path may be passed; the private key takes precedence for authentication, with no fallback to a password attempt.
Additional arguments are accepted and ignored for compatibility with other channel implementations.
-
expect
(match_res, timeout=60)¶ Waits for text to be received that matches one or more regex patterns.
Note that data may have been received before this call and is waiting in the buffer; you may want to call receive_all() to flush the receive buffer before calling send() and call this function to match the output from your send() only.
- Parameters
match_res – Pattern(s) to look for to be considered successful. May be a single regex string, or a list of them. Currently cannot match multiple lines.
timeout – maximum time, in seconds, to wait for a regular expression match. 0 to wait forever.
- Returns
(output, match_object) where output is the output of the command (without the matched text), and match_object is a Python re.MatchObject containing data on what was matched.
You may use MatchObject.string[m.start():m.end()] to recover the actual matched text, which will be unicode.
re.MatchObject.pattern will contain the pattern that matched, which will be one of the elements of match_res passed in.
- Raises
CmdlineTimeout – if no match found before timeout.
ConnectionError – if the channel is closed.
-
receive_all
()¶ Flushes the receive buffer, returning all text that was in it.
- Returns
the text that was present in the receive queue, if any.
-
send
(text_to_send)¶ Sends text to the channel immediately. Does not wait for any response.
- Parameters
text_to_send – Text to send, may be an empty string.
-
start
(match_res=None, timeout=60)¶ Start an interactive ssh session and logs in.
- Parameters
match_res – Pattern(s) of prompts to look for. May be a single regex string, or a list of them.
timeout – maximum time, in seconds, to wait for a regular expression match. 0 to wait forever.
- Returns
Python
re.MatchObject
containing data on what was matched.
SSHProcess
Objects¶
-
class
steelscript.cmdline.sshprocess.
SSHProcess
(host, user='root', password=None, private_key=None, port=22)¶ SSH transport class to handle ssh connection setup.
- Parameters
host – host/ip to ssh into
user – username to log in with
password – password to log in with
private_key – paramiko private key (Pkey) object
If a private_key is passed, it will take precedence over a password; however, no fallback attempt will be made if the private key connection fails.
-
connect
()¶ Connects to the host and logs in.
- Raises
ConnectionError – on error
-
disconnect
()¶ Disconnects from the host
-
is_connected
()¶ Check whether SSH connection is established or not.
- Returns
True if it is connected; returns False otherwise.
-
open_interactive_channel
(term='console', width=80, height=24)¶ Creates and starts a stateful interactive channel.
This should be used whenever the channel must remain open between commands for interactive processing, or when a terminal/tty is necessary; e.g., CLIs with modes.
- Parameters
term – terminal type to emulate; defaults to ‘console’
width – width (in characters) of the terminal screen; defaults to 80
height – height (in characters) of the terminal screen; defaults to 24
- Returns
A Paramiko channel that communicates with the remote end in a stateful way.
- Raises
ConnectionError – if the SSH connection has not yet been established.
TelnetChannel
Objects¶
-
class
steelscript.cmdline.telnetchannel.
TelnetChannel
(hostname, username='root', password='', port=23, **kwargs)¶ Two-way telnet channel that allows sending and receiving data.
Accepts and ignores additional parameters for compatibility with other channel construction interfaces.
- Parameters
hostname – host/ip to telnet into
username – username to log in with
password – password to log in with
port – telnet port, defaults to 23
-
expect
(match_res, timeout=60)¶ Waits for some text to be received that matches one or more regex patterns.
Note that data may have been received before this call and is waiting in the buffer; you may want to call receive_all() to flush the receive buffer before calling send() and call this function to match the output from your send() only.
- Parameters
match_res – Pattern(s) to look for to be considered successful. May be a single regex string, or a list of them.
timeout – maximum time, in seconds, to wait for a regular expression match. 0 to wait forever.
- Returns
(output, match_object) where output is the output of the command (without the matched text), and match_object is a Python re.MatchObject containing data on what was matched.
You may use MatchObject.string[m.start():m.end()] to recover the actual matched text, which will be unicode.
re.MatchObject.pattern will contain the pattern that matched, which will be one of the elements of match_res passed in.
- Raises
CmdlineTimeout – if no match found before timeout.
-
receive_all
()¶ Flushes the receive buffer, returning all text that was in it.
- Returns
the text that was present in the receive queue, if any.
-
send
(text_to_send)¶ Sends text to the channel immediately. Does not wait for any response.
- Parameters
text_to_send – Text to send, may be an empty string.
-
start
(match_res=None, timeout=15)¶ Starts a telnet session and logs in.
- Parameters
match_res – Pattern(s) of prompts to look for. May be a single regex string, or a list of them.
timeout – maximum time, in seconds, to wait for a regular expression match. 0 to wait forever.
- Returns
Python re.MatchObject containing data on what was matched.
Transport
Objects¶
-
class
steelscript.cmdline.transport.
Transport
¶ Abstract class to define common interfaces for a transport.
A transport is used by Cli/Shell object to handle connection setup.
-
abstract
connect
()¶ Abstract method to start a connection
-
abstract
disconnect
()¶ Abstract method to tear down current connection
-
abstract
is_connected
()¶ Check whether a connection is established or not.
- Returns
True if it is connected; returns False otherwise.
-
abstract
Parsers¶
Functions for parsing command line responses
-
steelscript.cmdline.parsers.
cli_parse_basic
(input_string)¶ Standard cli parser for key: value style output.
This parser goes through all the lines in the input string and returns a dictionary of parsed output. In addition to splitting the output into key-value pairs, the values will be fed through parse_boolean to turn strings such as yes and true into boolean objects, leaving other strings alone.
This function will parse cli output such as:
hw1-int1 (config) # show load balance fair-peer-v2
Fair peering V2: yes
Threshold: 15 %
creating a dictionary:
{'fair peering v2': True, 'threshold': '15 %'}
For this example, one would want to perform further manipulation on the dictionary to get it into a usable state, changing fair peering v2 to enabled and 15 % to 15 for threshold. See enable_squash() for part of this.
- Parameters
input_string – A string of CLI output to be parsed
- Returns
a dictionary of parsed output
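A minimal sketch of this behavior, re-implemented with the stdlib for illustration (parse_boolean_maybe is a hypothetical helper combining parse_boolean with the leave-alone fallback; this is not the library's code):

```python
def parse_boolean_maybe(value):
    """Coerce yes/no/true/false to bool, leave other strings alone."""
    lowered = value.lower()
    if lowered in ('yes', 'true'):
        return True
    if lowered in ('no', 'false'):
        return False
    return value

def cli_parse_basic(input_string):
    """Split 'key: value' lines into a dict with lower-cased keys."""
    result = {}
    for line in input_string.splitlines():
        if ':' not in line:
            continue  # skip prompts and other non key-value lines
        key, _, value = line.partition(':')
        result[key.strip().lower()] = parse_boolean_maybe(value.strip())
    return result
```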
-
steelscript.cmdline.parsers.
cli_parse_table
(input_string, headers)¶ Parser for Generic Table style output. More complex tables outputs may require a custom parser.
Parses output such as:
Destination     Mask             Gateway     Interface
10.3.0.0        255.255.248.0    0.0.0.0     aux
default         0.0.0.0          10.3.0.1
The left/right bounds of each data field are expected to fall underneath exactly 1 header. If a data item falls under none or more than 1, an error will be raised.
Data fields are initially divided by 2 spaces. This allows single spaces within the data fields. However, if the data crosses more than 1 header, it is then divided by single spaces and each piece will be part of whatever header it falls under. If any part doesn’t fall underneath a header, an error is raised.
The example output above would produce the following structure:
[
    {destination: 10.3.0.0, mask: 255.255.248.0, gateway: 0.0.0.0, interface: aux},
    {destination: default, mask: 0.0.0.0, gateway: 10.3.0.1},
]
- Parameters
input_string – A string of CLI output to be parsed
headers – array of headers in-order starting from the left.
- Returns
an array of dictionaries of parsed output
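A simplified sketch of column-based parsing, slicing each data row at the column where each header starts. The real parser additionally validates that every field falls under exactly one header; this illustration does not:

```python
def cli_parse_table(input_string, headers):
    """Slice each data row at the column where each header starts."""
    lines = [line for line in input_string.splitlines() if line.strip()]
    header_line, data_lines = lines[0], lines[1:]
    starts = [header_line.index(header) for header in headers]
    rows = []
    for line in data_lines:
        row = {}
        for i, header in enumerate(headers):
            end = starts[i + 1] if i + 1 < len(starts) else len(line)
            value = line[starts[i]:end].strip()
            if value:  # missing trailing fields are simply omitted
                row[header.lower()] = value
        rows.append(row)
    return rows
```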
-
steelscript.cmdline.parsers.
check_numeric
(value_string)¶ This function tries to determine whether a string would be better represented with a numeric type, either int or float. If neither works, for example 10 Mb, it will simply return the same string provided.
- Parameters
value_string – input string to be parsed.
- Returns
the input string, an int, or a float
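A minimal sketch of this coercion, for illustration only:

```python
def check_numeric(value_string):
    """Return an int if possible, else a float, else the original string."""
    try:
        return int(value_string)
    except ValueError:
        pass
    try:
        return float(value_string)
    except ValueError:
        return value_string
```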
-
steelscript.cmdline.parsers.
enable_squash
(input)¶ Convert long specific enable strings to ‘enabled’
Takes in a dictionary of parsed output, iterates over the keys and looks for key names containing the string “enabled” at the end of the key name. Specifically the end of the key name is matched for safety. Replaces the key with simply “enabled”, for example an input dictionary:
{"Path-selection enabled": False}
becomes:
{"enabled": False}
- Parameters
input – A dictionary of parsed output
- Return result
A dictionary with keys ending in “enabled” replaced with just “enabled”
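A minimal sketch of the key squashing described above (illustrative, not the library's code):

```python
def enable_squash(parsed):
    """Replace any key ending in 'enabled' with just 'enabled'."""
    return {('enabled' if key.endswith('enabled') else key): value
            for key, value in parsed.items()}
```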
-
steelscript.cmdline.parsers.
parse_boolean
(value_string)¶ Determine the boolean value of the input string.
“yes”, “no”, “true” and “false” are recognized (case-insensitive).
- Parameters
value_string – input string to be parsed.
- Returns
boolean value based on input string
- Raises
ValueError – if the string is not recognized as a boolean
-
steelscript.cmdline.parsers.
restart_required
(input)¶ Take result from a cli command and check if a service restart is required. Return True if cli result indicates restart required
- Parameters
input – result from a cli command
- Return type
bool
-
steelscript.cmdline.parsers.reboot_required(input)¶
Take the result of a CLI command and check whether a reboot is required. Returns True if the result indicates a reboot is required.
- Parameters
input – result from a cli command
- Return type
bool
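Both helpers scan CLI output for a marker phrase. A minimal sketch of that idea follows; the exact phrases matched here are assumptions for illustration, not taken from the library source:

```python
def restart_required(cli_result):
    # Hypothetical marker phrase; the real library may match differently.
    return "restart required" in cli_result.lower()

def reboot_required(cli_result):
    # Hypothetical marker phrase; the real library may match differently.
    return "reboot required" in cli_result.lower()

print(restart_required("Restart required for changes to take effect"))  # True
```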
-
steelscript.cmdline.parsers.parse_ip_and_port(input)¶
Parse an IP address and port number combination into a dictionary:
1.1.1.1:2000
to:
{'ip': IPv4Address('1.1.1.1'), 'port': 2000}
- Parameters
input (string) – IP and port
- Returns
dictionary with keys ip and port
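The split-and-convert step can be sketched with the standard library's ipaddress module. This is an illustrative reimplementation; the library's own IP address type may differ:

```python
import ipaddress

def parse_ip_and_port(value):
    """Split 'ip:port' into {'ip': <address object>, 'port': int}
    (sketch of the documented behavior using stdlib ipaddress)."""
    ip, _, port = value.rpartition(':')
    return {'ip': ipaddress.ip_address(ip), 'port': int(port)}

print(parse_ip_and_port('1.1.1.1:2000'))
# {'ip': IPv4Address('1.1.1.1'), 'port': 2000}
```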
-
steelscript.cmdline.parsers.parse_url_to_host_port_protocol(input)¶
Parse a URL into a dictionary using urllib.parse.urlparse(), inferring the port from the scheme (a.k.a. protocol):
http://blah.com
becomes:
{'host': 'blah.com', 'port': 80, 'protocol': 'http'}
- Parameters
input (string) – url
- Returns
dict with port always specified
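The scheme-to-port inference can be sketched on top of urllib.parse. This is an illustrative reimplementation; the default-port table here covers only http/https and is an assumption:

```python
from urllib.parse import urlparse

# Assumed scheme-to-port mapping for illustration.
DEFAULT_PORTS = {'http': 80, 'https': 443}

def parse_url_to_host_port_protocol(url):
    """Return {'host', 'port', 'protocol'}, filling in the port
    from the scheme when the URL omits it (sketch of the
    documented behavior)."""
    parsed = urlparse(url)
    port = parsed.port or DEFAULT_PORTS.get(parsed.scheme)
    return {'host': parsed.hostname, 'port': port,
            'protocol': parsed.scheme}

print(parse_url_to_host_port_protocol('http://blah.com'))
# {'host': 'blah.com', 'port': 80, 'protocol': 'http'}
```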
-
steelscript.cmdline.parsers.parse_saasinfo_data(input)¶
Parse saasinfo data into a dictionary containing IP, hostname, and GeoDNS mapping data structures:
=================================
SaaS Application
=================================
SAMPLEAPP
=================================
SaaS IP
=================================
10.41.222.0/24 [0-65535]
111.221.112.0/21 [1-65535]
111.221.116.0/24 [1-65535]
111.221.17.160/27 [1-65535]
111.221.20.128/25 [0-65535]
=================================
SaaS Hostname
=================================
*.mail.apac.example.com
*.example1.com
*.example2.com
example1.com
=================================
GeoDNS
=================================
---------------------------------
MBX Region
---------------------------------
blu nam.ca.bay-area
apc nam.ca.bay-area
xyz nam.ca.bay-area
abc nam.tx.san-antonio
---------------------------------
Regional IPs
---------------------------------
nam.ca.bay-area
132.245.80.146
132.245.80.150
nam.tx.san-antonio
132.245.80.153
132.245.80.156
132.245.81.114
to:
{
    'appid': 'SAMPLEAPP',
    'ip': [
        '10.41.222.0/24 [0-65535]',
        '111.221.112.0/21 [1-65535]',
        '111.221.116.0/24 [1-65535]',
        '111.221.17.160/27 [1-65535]',
        '111.221.20.128/25 [0-65535]',
    ],
    'host': [
        '*.mail.apac.example.com',
        '*.example1.com',
        '*.example2.com',
        'example1.com',
    ],
    'geodns': {
        'nam.ca.bay-area': {
            'mbx': ['blu', 'apc', 'xyz'],
            'ip': ['132.245.80.146', '132.245.80.150'],
        },
        'nam.tx.san-antonio': {
            'mbx': ['abc'],
            'ip': [
                '132.245.80.153',
                '132.245.80.156',
                '132.245.81.114',
            ],
        },
    },
}
- Parameters
input (string) – CLI output of saasinfo data
- Returns
dictionary with saasinfo data as above
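The banner-delimited layout above suggests a first parsing pass that splits the output into titled sections. A minimal standalone sketch follows; it handles only the '=' banners, not the '-' subsection banners, and is not the library's implementation:

```python
def split_sections(text):
    """Split '=' banner-delimited CLI output into a
    {section_title: [data lines]} dictionary (sketch of a first
    parsing pass over saasinfo-style output)."""
    sections = {}
    title = None
    lines = text.splitlines()
    i = 0
    while i < len(lines):
        line = lines[i].strip()
        if line and set(line) == {'='} and i + 1 < len(lines):
            # Banner: the next line is the title, followed by a
            # closing banner, so skip three lines.
            title = lines[i + 1].strip()
            sections[title] = []
            i += 3
        else:
            if title is not None and line:
                sections[title].append(line)
            i += 1
    return sections
```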