pyMez Tour

A tour of some of the features of pyMez


Importing pyMez after you have installed it.

There are many ways to import pyMez, but to get the base API it is traditional to use

from pyMez import *

If a summary of the imported modules and the time taken to import them appears, then the variables in pyMez/__init__.py

VERBOSE_IMPORT=True
TIMED_IMPORT=True

are set to True. To suppress this diagnostic output, set both variables to False.

Other ways

However, if a specific function or class is required, then a direct import also works (note that this form still imports the full API)

from pyMez.Code.DataHandlers.XMLModels import XMLBase

If you want the import to skip the API, add the pyMez folder to sys.path and then import beginning with Code

import sys
sys.path.append(r"C:\ProgramData\Anaconda2\Lib\site-packages\pyMez")
from Code.DataHandlers.XMLModels import XMLBase

In [1]:
# Here is the import statement for the base API. I have tried to include the most common things and exclude any
# slow-loading modules.
from pyMez import *
Importing pyMez, this should take roughly 30 seconds
Importing Code.DataHandlers.GeneralModels
It took 0.448 s to import Code.DataHandlers.GeneralModels
Importing Code.DataHandlers.HTMLModels
It took 0.104 s to import Code.DataHandlers.HTMLModels
Importing Code.DataHandlers.NISTModels
It took 1.191 s to import Code.DataHandlers.NISTModels
Importing Code.DataHandlers.TouchstoneModels
It took 0.004 s to import Code.DataHandlers.TouchstoneModels
Importing Code.DataHandlers.XMLModels
It took 0.053 s to import Code.DataHandlers.XMLModels
Importing Code.DataHandlers.ZipModels
It took 0.004 s to import Code.DataHandlers.ZipModels
Importing Code.InstrumentControl.Experiments
It took 0.411 s to import Code.InstrumentControl.Experiments
Importing Code.InstrumentControl.Instruments
It took 0.033 s to import Code.InstrumentControl.Instruments
Importing Code.Utils.Names
It took 0.006 s to import Code.Utils.Names
It took 2.255 s to import all of the active modules

In [2]:
# That is far from all of the available modules. To get a listing of all of them, print the constant API_MODULES
keys=sorted(API_MODULES.keys())
for key in keys:
    print("{0}:{1}".format(key,API_MODULES[key]))
Code.Analysis.Fitting:False
Code.Analysis.GeneralAnalysis:False
Code.Analysis.Interpolation:False
Code.Analysis.NISTUncertainty:False
Code.Analysis.Reports:False
Code.Analysis.SParameter:False
Code.Analysis.Transformations:False
Code.Analysis.Uncertainty:False
Code.DataHandlers.AbstractDjangoModels:False
Code.DataHandlers.GeneralModels:True
Code.DataHandlers.GraphModels:False
Code.DataHandlers.HTMLModels:True
Code.DataHandlers.MUFModels:False
Code.DataHandlers.NISTModels:True
Code.DataHandlers.RadiCALModels:False
Code.DataHandlers.StatistiCALModels:False
Code.DataHandlers.TouchstoneModels:True
Code.DataHandlers.Translations:False
Code.DataHandlers.XMLModels:True
Code.DataHandlers.ZipModels:True
Code.FrontEnds.AdvancedInterfaceFrame:False
Code.FrontEnds.BasicInterfaceFrame:False
Code.FrontEnds.EndOfDayDialog:False
Code.FrontEnds.GeneralInterfaceFrame:False
Code.FrontEnds.HTMLPanel:False
Code.FrontEnds.IEPanel:False
Code.FrontEnds.IPythonPanel:False
Code.FrontEnds.KeithleyIVPanel:False
Code.FrontEnds.MatplotlibWxPanel:False
Code.FrontEnds.ShellPanel:False
Code.FrontEnds.SimpleArbDBLowerInterfacePanel:False
Code.FrontEnds.SimpleLogLowerInterfacePanel:False
Code.FrontEnds.StyledTextCtrlPanel:False
Code.FrontEnds.VisaDialog:False
Code.FrontEnds.WxDialogFunctions:False
Code.FrontEnds.WxHTML2Panel:False
Code.FrontEnds.XMLEditPanel:False
Code.FrontEnds.XMLGeneral:False
Code.InstrumentControl.Experiments:True
Code.InstrumentControl.Instruments:True
Code.Utils.Alias:False
Code.Utils.DjangoUtils:False
Code.Utils.GetMetadata:False
Code.Utils.HPBasicUtils:False
Code.Utils.HelpUtils:False
Code.Utils.Names:True
Code.Utils.PerformanceUtils:False
Code.Utils.pyMezUnitTest:False

In [3]:
# The other modules that I tend to use a lot but import slowly are
# Simple fits
from pyMez.Code.Analysis.Fitting import *
# Interpolation
from pyMez.Code.Analysis.Interpolation import *
# Scattering parameter analysis
from pyMez.Code.Analysis.SParameter import *
# Data transformations mostly for sparameters / waveparameters
from pyMez.Code.Analysis.Transformations import *
# Data translations
from pyMez.Code.DataHandlers.Translations import *
# The metadata structures based on directed graphs
from pyMez.Code.DataHandlers.GraphModels import *
# The StatistiCAL and MUF wrappers
from pyMez.Code.DataHandlers.MUFModels import *
from pyMez.Code.DataHandlers.StatistiCALModels import *


DataHandling

There are many modules in the DataHandlers subpackage, which lives in the pyMez/Code/DataHandlers directory. The primary motivation of this subpackage is to create models for data manipulation and aggregation. The major data types of interest are ASCII-based tables, XML, HTML, Touchstone files, and specialty models such as zip, StatistiCAL, and MUF. There are several major ideas:

  1. The most important data type in science is a data table with a header (metadata); the content is more important than the format.
  2. XML and HTML are very important ways of reporting the data.
  3. There is no perfect data format, so transformations between formats are important (a short sketch follows below).

Since computers became available in science, there have been countless data formats, each with its own merits; the best format is the one that accomplishes its goals for the particular task.
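
As a small preview of the translation idea, here is a minimal sketch (assuming the base API and the Translations module are imported; the classes and functions used here all appear later in this tour):

from pyMez import *
from pyMez.Code.DataHandlers.Translations import *

# build a small table from an options dictionary
options={"column_names":["x","y"],"column_types":["float","float"],
         "data":[[1.0,2.0],[3.0,4.0]],"metadata":{"notes":"a small example"}}
table=AsciiDataTable(**options)
# translate the same content into other representations
xml_table=AsciiDataTable_to_XmlDataTable(table)    # an XML form
data_frame=AsciiDataTable_to_DataFrame(table)      # a pandas DataFrame form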


AsciiDataTable

For ASCII tables we use the class pyMez.Code.DataHandlers.GeneralModels.AsciiDataTable.

The basic structure is: <img src="./pyMez_Tour_Files/AsciiDataTable_Structure.png" width=50%/> It has many options and many ways of building it.

In [4]:
# AsciiDataTable is a data type that is meant to be self-documenting and to handle common data that is small and non-uniform
# It has places for metadata and supports changing the formatting
# to build it from scratch we can pass an options dictionary containing the data, column names, and column types
options={}
options["data"]=[[i+j for i in range(3)] for j in range(3) ]
options["column_types"]=["int","float","str"]
options["column_names"]=["a","b","c"]
options["header"]=["Data For Demonstration","A 3x3 Matrix"]
options["footer"]=["Don't use footers they suck"]
options["metadata"]={"notes":"This is an example table"}
data_table=AsciiDataTable(**options)
In [5]:
# now we can access the text version as a string using print
print(data_table)
Data For Demonstration
A 3x3 Matrix
a,b,c
0,1.0,2
1,2.0,3
2,3.0,4
Don't use footers they suck
In [6]:
# we can save the data table and reopen it.
# the default location is the current working directory with an auto-named file
# the file path is stored in the attribute path
data_table.path
Out[6]:
'Data_Table_20190124_001.txt'
In [7]:
# now we have a metadata dictionary that follows the table around
data_table.metadata["notes"]
Out[7]:
'This is an example table'
In [8]:
# to save as, we just pass the desired path to the save method
data_table.save(os.path.join(os.getcwd(),"pyMez_Tour_Files/test_data_table.txt"))
In [9]:
# to open it again we can use the class and the path
reopen=AsciiDataTable(os.path.join(os.getcwd(),"pyMez_Tour_Files/test_data_table.txt"))
In [10]:
print(reopen)
Data For Demonstration
A 3x3 Matrix
a,b,c
0,1.0,2
1,2.0,3
2,3.0,4
Don't use footers they suck
In [11]:
# The metadata dictionary is still there
reopen.metadata["notes"]
Out[11]:
'This is an example table'
In [12]:
# now we can format the file any way we want
reopen.options["column_names_begin_token"]="!"
reopen.options["data_delimiter"]="\t"
reopen.options["comment_begin"]="!"
reopen.options["comment_end"]=""
reopen.options["header_line_types"]=["normal","comment"]

print(reopen)
Data For Demonstration
!A 3x3 Matrix
!a,b,c
0	1.0	2
1	2.0	3
2	3.0	4
!Don't use footers they suck
In [13]:
# I personally think footers are not the best idea, so we can move the footer to the header
reopen.move_footer_to_header()
reopen.options["header_line_types"].append("comment")
print(reopen)
Data For Demonstration
!A 3x3 Matrix
!Don't use footers they suck
!a,b,c
0	1.0	2
1	2.0	3
2	3.0	4
In [14]:
# now we can save and reopen again
reopen.save("./pyMez_Tour_Files/Reopen.txt")
In [15]:
rereopen=AsciiDataTable("./pyMez_Tour_Files/Reopen.txt")
Warning \n is in the remove tokens
In [16]:
print(rereopen)
Data For Demonstration
!A 3x3 Matrix
!Don't use footers they suck
!a,b,c
0	1.0	2
1	2.0	3
2	3.0	4
In [17]:
# our metadata is still there because it is getting saved in the schema (reopen.options["metadata"])
rereopen.metadata
Out[17]:
{'notes': 'This is an example table'}
In [18]:
# it works somewhat like a pandas DataFrame, but the fundamental data type is a list, not a numpy array
reopen["a"]
Out[18]:
[0, 1, 2]
In [19]:
# this means that when we add columns together, the lists are concatenated
new_list=reopen["a"]+reopen["b"]+reopen["c"]
In [20]:
print(new_list)
[0, 1, 2, 1.0, 2.0, 3.0, '2', '3', '4']
In [21]:
# all the data is in the data attribute
rereopen.data
Out[21]:
[[0, 1.0, '2'], [1, 2.0, '3'], [2, 3.0, '4']]
In [22]:
# you can also get the data as a list of dictionaries
rereopen.get_data_dictionary_list()
Out[22]:
[{'a': '0', 'b': '1.0', 'c': '2'},
 {'a': '1', 'b': '2.0', 'c': '3'},
 {'a': '2', 'b': '3.0', 'c': '4'}]
In [23]:
# or get a row 
rereopen.get_row(2)
Out[23]:
[2, 3.0, '4']
In [24]:
# or get the unique values in a column
rereopen.get_unique_column_values("a")
Out[24]:
[0, 1, 2]
In [25]:
# there is the ability to use a row formatter 
row_formatter="{0} Bananas are {1:03.3f} but not {2}"
rereopen.options["row_formatter_string"]=row_formatter
print(rereopen)
Data For Demonstration
!A 3x3 Matrix
!Don't use footers they suck
!a,b,c
0 Bananas are 1.000 but not 2
1 Bananas are 2.000 but not 3
2 Bananas are 3.000 but not 4
In [26]:
# now to make your own class, inherit from AsciiDataTable and add special methods
class NewDataClass(AsciiDataTable):
    """Same as an AsciiDataTable but plots the first column using .show()"""
    def show(self):
        plt.plot(self.get_column(column_index=0))
        plt.show()
In [27]:
new_data=NewDataClass(os.path.join(os.getcwd(),"pyMez_Tour_Files/test_data_table.txt"))
In [28]:
new_data.show()
In [29]:
# If you do data analysis using pandas, we can transform the AsciiDataTable to a pandas DataFrame, or, if you
# want to preserve the header, to a dictionary of pandas DataFrames
from pyMez.Code.DataHandlers.Translations import *
pandas_df=AsciiDataTable_to_DataFrame(data_table)
pandas_dictionary=AsciiDataTable_to_DataFrameDictionary(data_table)
In [30]:
pandas_df
Out[30]:
a b c
0 0 1.0 2
1 1 2.0 3
2 2 3.0 4
In [31]:
pandas_dictionary["Data"]
Out[31]:
a b c
0 0 1.0 2
1 1 2.0 3
2 2 3.0 4
In [32]:
pandas_dictionary["Header"]
Out[32]:
Header_Line_Content
0 Data For Demonstration
1 A 3x3 Matrix
In [33]:
pandas_dictionary["Footer"]
Out[33]:
Footer_Line_Content
0 Don't use footers they suck


Touchstone Models

The Touchstone formats are for scattering parameters of any number of ports. There are three basic classes, S1PV1, S2PV1, and SNP, and they are in the module pyMez.Code.DataHandlers.TouchstoneModels. You can use the SNP class to open any touchstone file with 2 or more ports.

In [34]:
s1p=S1PV1("./pyMez_Tour_Files/load.s1p")
s2p=S2PV1("./pyMez_Tour_Files/thru.s2p")
s4p=SNP("./pyMez_Tour_Files/Solution_0.s4p")
In [35]:
# now the touchstone family has a data format ("RI","MA","DB")
# the class has a very similar interface to the AsciiDataTable
# except that they all have a show method that plots the data
s1p.show();
In [36]:
s1p.column_names
Out[36]:
['Frequency', 'reS11', 'imS11']
In [37]:
s1p["reS11"]
Out[37]:
[-0.01049680327,
 -0.01052926564,
 -0.01056833871,
 -0.01061750133,
 -0.01069049455,
 -0.01080291858,
 -0.01114448419,
 -0.01124639209,
 -0.01135111716,
 -0.01144485541,
 -0.01152392736]
In [38]:
# because of the number of parameters in many-port files, the traces get clustered in the plot
s4p.show(display_legend=False);
In [39]:
# There are two attributes that contain the sparameters: .data, which holds them in tabular form,
# and .sparameter_complex, which stores them as complex numbers
print(s1p.sparameter_complex)
print(s1p.data)
[[950000000.0, (-0.01049680327+0.008134703699j)], [960000000.0, (-0.01052926564+0.00790323825j)], [970000000.0, (-0.01056833871+0.007654562961j)], [980000000.0, (-0.01061750133+0.007398920957j)], [990000000.0, (-0.01069049455+0.007140403558j)], [1000000000.0, (-0.01080291858+0.006857244594j)], [1010000000.0, (-0.01114448419+0.006146118651j)], [1020000000.0, (-0.01124639209+0.005872048007j)], [1030000000.0, (-0.01135111716+0.005636381121j)], [1040000000.0, (-0.01144485541+0.005420531738j)], [1050000000.0, (-0.01152392736+0.005215797579j)]]
[[950000000.0, -0.01049680327, 0.008134703699], [960000000.0, -0.01052926564, 0.00790323825], [970000000.0, -0.01056833871, 0.007654562961], [980000000.0, -0.01061750133, 0.007398920957], [990000000.0, -0.01069049455, 0.007140403558], [1000000000.0, -0.01080291858, 0.006857244594], [1010000000.0, -0.01114448419, 0.006146118651], [1020000000.0, -0.01124639209, 0.005872048007], [1030000000.0, -0.01135111716, 0.005636381121], [1040000000.0, -0.01144485541, 0.005420531738], [1050000000.0, -0.01152392736, 0.005215797579]]
In [40]:
#  to change the data format use 
s1p.change_data_format("MA")
print(s1p)
# GHz S MA R 50
950000000  0.0132799203  142.225403
960000000  0.01316535642  143.1082064
970000000  0.01304921903  144.0844872
980000000  0.01294122737  145.1288695
990000000  0.01285581723  146.2602148
1000000000  0.01279550129  147.5943138
1010000000  0.01272691252  151.1235182
1020000000  0.01268709119  152.4297108
1030000000  0.01267346255  153.5933685
1040000000  0.01266360453  154.6567499
1050000000  0.01264932592  155.6482117
In [41]:
s1p.change_data_format("RI")
print(s1p)
# GHz S RI R 50
950000000  -0.01049680327  0.008134703699
960000000  -0.01052926564  0.00790323825
970000000  -0.01056833871  0.007654562961
980000000  -0.01061750133  0.007398920957
990000000  -0.01069049455  0.007140403558
1000000000  -0.01080291858  0.006857244594
1010000000  -0.01114448419  0.006146118651
1020000000  -0.01124639209  0.005872048007
1030000000  -0.01135111716  0.005636381121
1040000000  -0.01144485541  0.005420531738
1050000000  -0.01152392736  0.005215797579


XML and HTML Models

XML and HTML are markup languages used extensively on the web. We use them for reporting and data storage. The modules pyMez.Code.DataHandlers.XMLModels and pyMez.Code.DataHandlers.HTMLModels have most of these models and are loaded in the base API. However, two analysis modules, pyMez.Code.Analysis.Reports and pyMez.Code.Analysis.ProgramAnalysis, contain related classes and functions. The folder pyMez/Code/DataHandlers/XSL has all of the style sheets.

In [42]:
# A simple xml log
xml_log=XMLLog()
In [43]:
# it can add an entry
xml_log.add_entry("This is an entry")
print(xml_log)
<Log><Entry Date="2019-01-25T00:43:00.230000" Index="1">This is an entry</Entry></Log>
In [44]:
xml_log.save("./pyMez_Tour_Files/log.xml")
In [45]:
xml_reopen_log=XMLLog("./pyMez_Tour_Files/log.xml")
In [46]:
print(xml_reopen_log)
<Log><Entry Date="2019-01-25T00:43:00.230000" Index="1">This is an entry</Entry></Log>
In [47]:
# This is an XML data table created from the AsciiDataTable
xml_data_table=AsciiDataTable_to_XmlDataTable(data_table)
In [48]:
print(xml_data_table)
<Data_Table><Data_Description><notes>This is an example table</notes></Data_Description><Data><Tuple a="0" b="1.0" c="2"/><Tuple a="1" b="2.0" c="3"/><Tuple a="2" b="3.0" c="4"/></Data></Data_Table>
In [49]:
# now we can change this to HTML using an XSL style sheet
html_data_table=xml_data_table.to_HTML(os.path.join(TESTS_DIRECTORY,"../XSL/DEFAULT_MEASUREMENT_STYLE.xsl"))
In [50]:
print(html_data_table)
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<h3>Data Description:</h3><table><tr>
<td><b>notes :</b></td>
<td>This is an example table</td>
</tr></table><h3>Data:</h3><table border="2" bgcolor="white" cellpadding="1" bordercolor="black" bordercolorlight="black">
<tr>
<th bgcolor="silver"><b>a</b></th>
<th bgcolor="silver"><b>b</b></th>
<th bgcolor="silver"><b>c</b></th>
</tr>
<tr>
<td>0</td>
<td>1.0</td>
<td>2</td>
</tr>
<tr>
<td>1</td>
<td>2.0</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>3.0</td>
<td>4</td>
</tr>
</table>

In [51]:
# if we want to store it we can use save_HTML or load it into the HTMLBase class
html_data_table_2=HTMLBase(html_text=html_data_table)
In [52]:
html_data_table_2.show()
file://c:/users/sandersa/appdata/local/temp/1/tmpxo3sog.html
In [53]:
# or for a more interactive example, we can use a translation from s2p to XML and then to HTML and show it
xml_s2p=S2PV1_to_XmlDataTable(s2p,format="MA")
html_s2p=HTMLBase(html_text=xml_s2p.to_HTML(os.path.join(TESTS_DIRECTORY,"../XSL/S2P_MA_STYLE.xsl")))
html_s2p.show()
file://c:/users/sandersa/appdata/local/temp/1/tmptssjuj.html
In [54]:
# the HTMLReport class is a descendant of HTMLBase but has the added ability to embed images
from pyMez.Code.Analysis.Reports import HTMLReport
In [55]:
html_report=HTMLReport(None,html_text=html_data_table)
In [56]:
# now we want to add the image to the report
html_report.embedd_image_figure(image=s2p.show(silent=True),image_mode="MatplotlibFigure",
                                caption="A Plot of Table Data",figure_id="Figure1")
html_report.show()
file://c:/users/sandersa/appdata/local/temp/1/tmpynztgz.html
In [57]:
help(html_report.embedd_image_figure)
Help on method embedd_image_figure in module pyMez.Code.Analysis.Reports:

embedd_image_figure(self, image, image_mode='MatplotlibFigure', figure_id='image', caption='', style='', **options) method of pyMez.Code.Analysis.Reports.HTMLReport instance
    Embedds an image in the report. image_mode can be  MatplotlibFigure (a reference to the figure class),
    Image (the PIL class),
    Base64 (a string of the values),
    Png, Jpg, Bmp Tiff(the file name),
    or a Ndarray of the image values. The image is in a <figure id=figure_id> tag

In [58]:
# we can also add elements one by one
html_report.append_to_body("<h1>Title</h1>")
In [59]:
html_report.show()
file://c:/users/sandersa/appdata/local/temp/1/tmpojfnpn.html
In [60]:
# or add a log
html_log=HTMLBase(html_text=xml_log.to_HTML(os.path.join(TESTS_DIRECTORY,"../XSL/DEFAULT_LOG_STYLE.xsl")))
html_report+html_log
html_report.show()
file://c:/users/sandersa/appdata/local/temp/1/tmpg6rtt0.html


Meta Models

Meta Models are based on a series of translations found in pyMez.Code.DataHandlers.Translations, and the models themselves are in pyMez.Code.DataHandlers.GraphModels. They are meant to be integrated into a Universal Data Translator.

In [61]:
# the modules are not loaded in the base API
from pyMez.Code.DataHandlers.GraphModels import *
In [62]:
# the meta models I use the most are TableGraph, MetadataGraph and ImageGraph
image_graph=ImageGraph()
In [63]:
%matplotlib inline
# for a visualization of the formats available to the meta model use the show method
# The green nodes are a one-way path, use jump_to_external_node to reach them. The state of the graph will
# then be left in the closest node. The blue node is the current node
plt.close()
image_graph.show()
Out[63]:
<matplotlib.figure.Figure at 0x38467e10>
<matplotlib.figure.Figure at 0x38467e10>
In [64]:
# The PIL Image class has a show method
image_pil=image_graph.data
image_pil.show()
In [65]:
%matplotlib wx
# This means that we can take data in any of these formats and turn it into any of the others
image_graph.move_to_node("MatplotlibFigure")
figure=image_graph.data
figure.show()
pil_image.mode is RGB
<matplotlib.figure.Figure at 0x38f7f128>
In [66]:
%matplotlib inline
# now the graph is at a different node
image_graph.show()
Out[66]:
<matplotlib.figure.Figure at 0x3983add8>
<matplotlib.figure.Figure at 0x3983add8>
In [67]:
metadata_graph=MetadataGraph()
In [68]:
%matplotlib inline
metadata_graph.show()
Out[68]:
<matplotlib.figure.Figure at 0x38236390>
<matplotlib.figure.Figure at 0x38236390>
In [69]:
# metadata is taken to be any key-value pair. The most natural way to express this in Python is a dictionary
metadata_dictionary={"Device":"42","Time":datetime.datetime.now(),"Notes":"A Test of metadata"}
metadata_graph.set_state(node_name="Dictionary",node_data=metadata_dictionary)
In [70]:
# now we can express this in any format in the graph
# for example we move to the AsciiDataTable node and print it
metadata_graph.move_to_node("AsciiDataTable")
meta_table=metadata_graph.data
print(meta_table)
Property,Value
Device,42
Notes,A Test of metadata
Time,2019-01-24 17:43:13.405000
In [71]:
# or in a pandas data frame
metadata_graph.move_to_node("DataFrame")
meta_df=metadata_graph.data
print(meta_df)
  Property                       Value
0   Device                          42
1    Notes          A Test of metadata
2     Time  2019-01-24 17:43:13.405000
In [72]:
# or as HTML metadata
metadata_graph.move_to_node("HtmlMetaString")
meta_df=metadata_graph.data
print(meta_df)
<meta name="Device" content="42" />
<meta name="Notes" content="A Test of metadata" />
<meta name="Time" content="2019-01-24 17:43:13.405000" />

In [73]:
# or as a header list
metadata_graph.move_to_node("HeaderList")
meta_df=metadata_graph.data
print(meta_df)
['Device=42', 'Notes=A Test of metadata', 'Time=2019-01-24 17:43:13.405000']
In [74]:
# or as an XML fragment
metadata_graph.move_to_node("XmlString")
meta_df=metadata_graph.data
print(meta_df)
<Device>42</Device>
<Notes>A Test of metadata</Notes>
<Time>2019-01-24 17:43:13.405000</Time>

In [75]:
# or as JSON
metadata_graph.move_to_node("JsonString")
meta_df=metadata_graph.data
print(meta_df)
{"Device": "42", "Notes": "A Test of metadata", "Time": "2019-01-24 17:43:13.405000"}
In [76]:
table_graph=TableGraph()
In [77]:
table_graph.show()
Out[77]:
<matplotlib.figure.Figure at 0x3b4afda0>
<matplotlib.figure.Figure at 0x3b4afda0>


Overview Analysis

The Analysis subpackage contains modules dedicated to the most common analysis tasks. It has an elaborate module for scattering parameters and more basic functionality for fits and other common tasks such as comparing tables with uncertainties.


Fitting and Interpolation

The fitting and interpolation modules allow for the creation and manipulation of data. They are not in the base API and must be loaded separately. Fitting for functions of a single variable works using a sympy/scipy composite function.

In [78]:
from pyMez.Code.Analysis.Fitting import *
from pyMez.Code.Analysis.Interpolation import *
In [79]:
# Now say we want to create a table of data
time_list=np.linspace(0,5,1000)
sine_wave=FunctionalModel(variables=["t"],parameters=["A","phi"],equation="A*sin(2*pi*t+phi)")
f_list=[1,1.2,1.4]
multisine=Multicosine(f_list)
# Now that we have some functions, we can set the parameters and use them to plot
sine_wave.set_parameters({"A":1.0,"phi":0})
multisine.set_parameters({"A_1":1.,"A_2":.3,"A_3":.5,"phi_1":0,"phi_2":np.pi/2.,"phi_3":0})
plt.plot(time_list,sine_wave(time_list),label="Sine Wave")
plt.plot(time_list,multisine(time_list),label="Multisine Wave")
plt.legend()
plt.show()
In [80]:
# If we want synthetic data we can use a data simulator or just add random noise
sythetic_data=DataSimulator(model=multisine,output_noise_center=0.,output_noise_width=.1,output_noise_type="normal")
sythetic_data.set_parameters({"A_1":1.,"A_2":.3,"A_3":.5,"phi_1":0,"phi_2":np.pi/2.,"phi_3":0})
sythetic_data.set_x(0,5,1000)
sythetic_data.get_data()
plt.close()
plt.plot(time_list,sine_wave(time_list),label="Sine Wave")
plt.plot(time_list,sythetic_data.data,label="Synthetic Data")
plt.legend()
plt.show()
In [81]:
# now we can use the multisine to fit the synthetic data
multisine.fit_data(time_list,sythetic_data.data)
# the fit method just sets the parameter values to the least squares value
In [82]:
multisine.parameter_values
Out[82]:
{'A_1': 0.999681011282954,
 'A_2': 0.30027002738120456,
 'A_3': 0.49759480647792703,
 'phi_1': 0.0029180858341852287,
 'phi_2': -1.5570041333652083,
 'phi_3': 0.01296227270060869}
In [83]:
plt.close()
plt.plot(time_list,sythetic_data.data,label="Synthetic Data")
plt.plot(time_list,multisine(time_list),label="Multisine Fit ")
plt.legend()
plt.show()
In [84]:
# now say we want to build a table with the data
# we start with a table that just has the time column
new_table=AsciiDataTable(None,column_types=["float"],column_names=["Time"],data=map(lambda x: [x],time_list.tolist()))
In [85]:
# we can add columns
new_table.add_column(column_name="Sine_Model",column_data=sine_wave(time_list).tolist(),column_type="float")
In [86]:
# ideal data
multisine.set_parameters({"A_1":1.,"A_2":.3,"A_3":.5,"phi_1":0,"phi_2":np.pi/2.,"phi_3":0})
new_table.add_column(column_name="Multisine_Model",column_data=multisine(time_list).tolist(),column_type="float")
# data with noise
new_table.add_column(column_name="Synthetic_Data",column_data=sythetic_data.data.tolist(),column_type="float")
#fit of data with noise
multisine.fit_data(time_list,sythetic_data.data)
new_table.add_column(column_name="Synthetic_Fit",column_data=multisine(time_list).tolist(),column_type="float")
In [87]:
# now we have built the table with the following columns
new_table.column_names
Out[87]:
['Time', 'Sine_Model', 'Multisine_Model', 'Synthetic_Data', 'Synthetic_Fit']
In [88]:
new_table.save("./pyMez_Tour_Files/Fitting_Table.txt")
In [89]:
help(interpolate_table)
Help on function interpolate_table in module pyMez.Code.Analysis.Interpolation:

interpolate_table(table, independent_variable_list)
    Returns a copy of the table interpolated to the independent variable list
    Assumes there is a single independent variable in the first column

In [90]:
# now if we want to interpolate we can just use interpolate_table
new_time_list=np.linspace(1,2,1000).tolist()
interpolated_new_table=interpolate_table(new_table,new_time_list)
In [91]:
plt.plot(time_list,sythetic_data.data,label="Original Data")
plt.plot(interpolated_new_table["Time"],interpolated_new_table["Synthetic_Data"],label="Interpolated Data")
plt.legend()
plt.show()


Uncertainty

In the Analysis subpackage there are classes and functions to create and compare uncertainties. They are contained in pyMez.Code.Analysis.Uncertainty, pyMez.Code.Analysis.SParameter, and pyMez.Code.Analysis.NISTUncertainty. In addition, there are several data types in the DataHandlers subpackage that deal with the output of error calculators such as StatistiCAL and the Microwave Uncertainty Framework.

In [92]:
# these modules are not in the Base API so you need to load them separately.
from pyMez.Code.Analysis.Uncertainty import *
from pyMez.Code.Analysis.SParameter import *
In [93]:
# Now if you have two tables, at least one with uncertainties, we can create a standard error table
help(standard_error_data_table)
Help on function standard_error_data_table in module pyMez.Code.Analysis.Uncertainty:

standard_error_data_table(table_1, table_2, **options)
    standard error data table takes two tables and creates a table that is the standard error of the two tables,
    at least one table must have uncertainties associated with it. The input tables are assumed to have data
    in the form [[x, y1, y2,...]..] Uncertainties can be specified as a column name in the respective
    table, fractional, constant, or a function of the values. The returned table is an object
    of the class StandardErrorModel(AsciiDataTable) that has data in the form
    [[independent_varaible,SEValue1,SEValue2...]...] where column names are formed by
    appending SE to the value column names. To plot the table use result.show()

In [94]:
# The standard error table has a lot of possibilities: any set of column names can be used as the values,
# and the errors can be a function, a percentage, a constant or a table. In addition, the error for table 2 can be specified
# or unspecified, and the resulting table only has values at the independent_variable locations of table_1.
# As an example we can take a raw scattering parameter measurement file from cal services, calculate the uncertainty using
# the calrep program, and then compare it to a results file
raw_scattering_parameters=TwoPortRawModel("./pyMez_Tour_Files/CTN206.A35_092805")
raw_scattering_parameters.column_names
Out[94]:
['Frequency',
 'Direction',
 'Connect',
 'magS11',
 'argS11',
 'magS21',
 'argS21',
 'magS22',
 'argS22']
In [95]:
# Now we can estimate the errors using the calrep program. It creates a series of error estimates based on
# the six-port error analysis
calrep_scattering_parameters=calrep(raw_scattering_parameters)
calrep_scattering_parameters.column_names
Out[95]:
['Frequency',
 'magS11',
 'uMbS11',
 'uMaS11',
 'uMdS11',
 'uMgS11',
 'argS11',
 'uAbS11',
 'uAaS11',
 'uAdS11',
 'uAgS11',
 'magS21',
 'uMbS21',
 'uMaS21',
 'uMdS21',
 'uMgS21',
 'argS21',
 'uAbS21',
 'uAaS21',
 'uAdS21',
 'uAgS21',
 'magS22',
 'uMbS22',
 'uMaS22',
 'uMdS22',
 'uMgS22',
 'argS22',
 'uAbS22',
 'uAaS22',
 'uAdS22',
 'uAgS22']
In [96]:
# Now we can load a file that was created as a mean of good measurements
mean_scattering_parameters=ResultFileModel("./pyMez_Tour_Files/CTN206.Results")
mean_scattering_parameters.column_names
Out[96]:
['Device_Id',
 'Frequency',
 'Number_Measurements',
 'magS11',
 'argS11',
 'magS21',
 'argS21',
 'magS22',
 'argS22']
In [97]:
# now we can specify all of the options. The standard error data table can handle any two AsciiDataTable descendants
error_options={"independent_variable_column_name":"Frequency",
              "value_column_names":['magS11','argS11','magS21',
                                            'argS21','magS22','argS22'],
              "table_1_uncertainty_column_names":['uMgS11','uAgS11',
                                                  'uMgS21','uAgS21','uMgS22','uAgS22'],
              "table_2_uncertainty_column_names":['uMgS11','uAgS11',
                                                  'uMgS21','uAgS21','uMgS22','uAgS22'],
               "uncertainty_table_1":None,
               "uncertainty_table_2":None,
               "uncertainty_function_table_1":None,
               "uncertainty_function_table_2":None,
               "uncertainty_function":None,
               "uncertainty_type":None,
               "table_1_uncertainty_type":"table",
               "table_2_uncertainty_type":None,
               "expansion_factor":1,
               'debug':False}
standard_error_scattering_parameters=standard_error_data_table(calrep_scattering_parameters,
                                                               mean_scattering_parameters,**error_options)
In [98]:
# now we can save, plot and use the standard error table
standard_error_scattering_parameters.column_names
Out[98]:
['Frequency',
 'SEmagS11',
 'SEargS11',
 'SEmagS21',
 'SEargS21',
 'SEmagS22',
 'SEargS22']
In [99]:
standard_error_scattering_parameters.save("./pyMez_Tour_Files/Standard_Error.txt")
In [100]:
standard_error_scattering_parameters.show();
In [101]:
# we can use a special function to look at the calrep and results comparison
plot_calrep_results_comparison(calrep_model=calrep_scattering_parameters,results_model=mean_scattering_parameters);
plot_calrep_results_difference_comparison(calrep_model=calrep_scattering_parameters,results_model=mean_scattering_parameters);


Scattering Parameters

There are many functions based on the manipulation of scattering parameter and wave parameter data, normally taken using a vector network analyzer. The functions and classes that deal with scattering parameters reside in several modules. In the base API there is pyMez.Code.DataHandlers.TouchstoneModels for dealing with snp-style files. The add-on modules pyMez.Code.DataHandlers.StatistiCALModels and pyMez.Code.DataHandlers.MUFModels have classes and functions that interact with VNA calibration software from NIST. The modules pyMez.Code.DataHandlers.Translations and pyMez.Code.Analysis.Transformations have conversion functions to transform scattering parameters to other data types and wave parameters to scattering parameters. Finally, the module pyMez.Code.Analysis.SParameter has functions for analyzing frequency-dependent data, calculating uncertainties, applying corrections and plotting.

In [102]:
# Let's open three connects (measurements) of a single device
connect_1=SNP(r"./pyMez_Tour_Files/Line_4909_WR15_20180313_001.s2p")
connect_2=SNP(r"./pyMez_Tour_Files/Line_4909_WR15_20180313_002.s2p")
connect_3=SNP(r"./pyMez_Tour_Files/Line_4909_WR15_20180313_003.s2p")
In [103]:
# now if we want to plot them together we can use 
compare_s2p_plots([connect_1,connect_2,connect_3]);
In [104]:
# we can calculate the mean of the three in multiple ways. If we convert to AsciiDataTables we can add the files
# and then use frequency_model_collapse_multiple_measurements
first_file=Snp_to_AsciiDataTable(connect_1)
joined_data_table=first_file.copy()
for connect in [connect_2,connect_3]:
    joined_data_table=joined_data_table+Snp_to_AsciiDataTable(connect)
mean_table=frequency_model_collapse_multiple_measurements(joined_data_table)
std_table=frequency_model_collapse_multiple_measurements(joined_data_table,method="STD")
In [105]:
# now we can plot the tables using the general function plot_frequency_model
plot_frequency_model(mean_table,plot_format="r-");
In [106]:
plot_frequency_model(std_table,plot_format="r-");
In [107]:
# we can also correct the data given a sixteen-term correction in s4p format
correction=SNP("./pyMez_Tour_Files/Solution_WR15.s4p")
In [108]:
# The correction is made to the complex data so that it is essentially format-free (just a list of lists of complex numbers)
corrected_complex_data=correct_sparameters_sixteen_term(sixteen_term_correction=correction.sparameter_complex,
                                                        sparameters_complex=connect_1.sparameter_complex)
# we can use the S2PV1 model to encapsulate the data
corrected_connect_1=S2PV1(sparameter_complex=corrected_complex_data)
corrected_connect_1.show();
In [109]:
# to check we can uncorrect the s2p 

uncorrected_complex_data=uncorrect_sparameters_sixteen_term(sixteen_term_correction=correction.sparameter_complex,
                                                        sparameters_complex=corrected_connect_1.sparameter_complex)
# we can use the S2PV1 model to encapsulate the data
uncorrected_connect_1=S2PV1(sparameter_complex=uncorrected_complex_data,column_types=["float" for i in range(9)])
uncorrected_connect_1.show();


Other Analysis

pyMez has several other analysis modules and will be extended over time. For instance, pyMez.Code.Analysis.ProgramAnalysis provides tools for creating an SVG example of a function, linking a text form of the input, a copy of the code, and a form of the output to an SVG diagram.


Overview

Instrument control in pyMez is primarily contained in the module pyMez.Code.InstrumentControl.Instruments. The subpackage pyMez.Code.InstrumentControl is meant to expand over time, as more instrument drivers are added. The module pyMez.Code.InstrumentControl.Experiments houses the combination of several instruments and data management. If an experiment reaches the point where it will be replicated several times, then the experiment class can be bound to a GUI.

Behavior of Instruments

There are a few issues that the package pyvisa did not address adequately for our lab. The Instrument class (VisaInstrument) now has several capabilities:

  1. The ability to link to a static data document in XML that describes the instrument (InstrumentSheet).
  2. The ability to save the state of the instrument with a flexible number of state commands included (InstrumentState).
  3. The ability to load that state at a later time.
  4. A diagnostic mode (emulation_mode) that allows for prototyping even when an instrument or bus is not present.

Basics of the Instruments module

For the most part we deal with instruments using VISA (using the class VisaInstrument). The main interface has some common features (a short sketch follows this list):

  • Upon creation the instrument_sheet_directory is searched for an InstrumentSheet; if one exists, the data in it is added to the class.
  • All instruments have write, read and query commands; in addition, the resource attribute holds the visa implementation, so any commands such as write_binary are available as instrument.resource.write_binary.
  • All instruments have .get_state, .set_state, .save_state and .load_state methods that can sequentially write a series of commands to query or set a state from a variable or file.
  • Instruments that are children of the VisaInstrument class normally have methods `.initialize_{measurement}` and `.measure_{measurement}`.
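
A minimal sketch of this common interface, assuming the base API is imported and an instrument sheet exists for the address "GPIB::16" (SOUR:POW is a set command from the VNA sheet used later in this tour; the value and the file path are just for illustration, and without hardware the class simply enters emulation mode):

from pyMez import *

# create the instrument; with no hardware present it enters emulation mode
instrument=VisaInstrument("GPIB::16")
# low-level bus access
instrument.write("SOUR:POW -10")
power=instrument.query("SOUR:POW?")
# the underlying visa implementation is available as instrument.resource
# state handling; with no arguments these use the defaults read from the instrument sheet
state=instrument.get_state()
instrument.save_state(state_path="./pyMez_Tour_Files/Example_State.xml")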


Instrument Sheets

To provide information regarding an instrument, including the default state commands, we have developed instrument sheets. Essentially, they are XML documents that provide a set of information about the instrument. The instrument sheet data handler class lives in pyMez.Code.DataHandlers.XMLModels and is a parent of the VisaInstrument class. If creation of an instrument does not find a matching sheet in the instrument_description_directory passed at creation of the class, it behaves as if the description is empty. The creation of VisaInstrument looks for any unique string that identifies the instrument and then deduces the GPIB address from the instrument sheet. The sheet can be extended to include any information that the user needs or wants.

An Example Instrument Sheet

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="DEFAULT_INSTRUMENT_STYLE.xsl"?>
<!-- Written by Aric Sanders 04/2017 -->


<Instrument>
<!-- Information Specific To My Power Meter-->
    <Specific_Information>
        <Name>NRPPowerMeter</Name>
        <Alias>RS_NRP_Power_Meter_4639_01</Alias>
        <Location>Building 1, 4639</Location>
        <Manual href="../Documentation/Manuals/RS_NRP2_Manual.pdf"/>
        <Image href="./RS_NRP_Power_Meter_4639_01_Images/NRP.jpg"/>
        <Price></Price>
        <Serial></Serial>
        <IDN>Rohde&amp;Schwarz,NRP,102508,06.01</IDN>
        <Instrument_Type>GPIB</Instrument_Type>
        <Instrument_Address>GPIB::14</Instrument_Address>
        <Purchase_Date></Purchase_Date>
        <NIST_Tag>935166</NIST_Tag>
    </Specific_Information>
<!-- Information Common To All NRP Power meters-->
    <General_Information>
    <Manufacturer>Rohde and Schwarz</Manufacturer>
    <Manufacturer_Website href="https://www.rohde-schwarz.com/us/home_48230.html" />
     <Commands_Description>
        <Command>This is the command sent over the GPIB bus.</Command>
        <Type>Whether or not it returns a value or just sets something.</Type>
        <Argument>The parameter the command passes to the instrument. Optional parameters
        are denoted with an *. The types of paramters are int=integer, float=floating point number,
        string=string, and None=NULL.</Argument>
        <Returns> What gets returned by the function. </Returns>
        <Description>A one line describing the purpose of the function, for more detailed info look in the manual. </Description>
    </Commands_Description>
    <Commands>

    </Commands>
    <Command_Parameter_Definitions>

    </Command_Parameter_Definitions>
    <State_Commands>
        <Tuple Set="SENS:FUNC" Query="SENS:FUNC?"/>
        <Tuple Set="UNIT:POW" Query="UNIT:POW?"/>
    </State_Commands>
    </General_Information>

</Instrument>
In [110]:
# for instance the NRPPowerMeter is a Rohde and Schwarz power meter
power_meter=VisaInstrument("NRPPowerMeter")
Unable to load resource entering emulation mode ...
In [111]:
# Since the address was a unique string in the XML data sheet, it finds all the information in that sheet and loads it
# into attributes that mirror the tag names (all lowercase)
power_meter.idn
Out[111]:
'Rohde&Schwarz,NRP,102508,06.01'
In [112]:
# this lets the user define domain- or user-specific information that is added to the control class on creation
power_meter.nist_tag
Out[112]:
'935166'
In [113]:
# it also allows the user to define "State_Commands" in the general description, which creates a
# default state query dictionary constant for the instrument; this can be changed after creation
power_meter.DEFAULT_STATE_QUERY_DICTIONARY
Out[113]:
{'SENS:FUNC': 'SENS:FUNC?', 'UNIT:POW': 'UNIT:POW?'}
In [114]:
# this controls the behavior of the .get_state and .save_state methods when called without a state dictionary or table
power_meter.get_state()
Out[114]:
{'SENS:FUNC': 'Buffer Read at 2019-01-25T00:43:26.697000',
 'UNIT:POW': 'Buffer Read at 2019-01-25T00:43:26.696000'}
In [115]:
# In addition, the instrument sheet has an XSL that transforms it to HTML
html_instrument_sheet=HTMLBase(html_text=power_meter.to_HTML(os.path.join(TESTS_DIRECTORY,"../XSL/DEFAULT_INSTRUMENT_STYLE.xsl")))
html_instrument_sheet.show()
file://c:/users/sandersa/appdata/local/temp/1/tmp84mlat.html


Instrument States

A major thrust of pyMez is to provide dynamic metadata for instruments, that is, the ability to store the state of an instrument at a given moment in a tabular or XML form and then recall it. For the XML version of states we use classes found in pyMez.Code.DataHandlers.XMLModels, which is part of the base API. This is the default style; however, it is easy to convert to a regular AsciiDataTable or similar. When dealing with states it is important to realize that a state dictionary without an index will be written to the instrument in random order; if order is important, a state table with an index is available (see the sketch below). The default state behavior is determined by the instrument sheet if one is found. The VisaInstrument class will save the states for instruments in the state_directory specified at creation, or default to the current working directory.
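
A minimal sketch of the two state forms, assuming an instrument like the one created in the next cell (the set commands come from the VNA instrument sheet shown later; the values are made up for illustration):

from pyMez import *

instrument=VisaInstrument("GPIB::16")
# an unordered state: a dictionary of {set_command: value} pairs, written in arbitrary order
instrument.set_state(state_dictionary={"SOUR:POW":"-10","SENS:BAND":"100"})
# an ordered state: a list of {"Index":...,"Set":...,"Value":...} dictionaries, written in Index order
state_table=[{"Index":0,"Set":"SOUR:POW","Value":"-10"},
             {"Index":1,"Set":"SENS:BAND","Value":"100"}]
instrument.set_state(state_table=state_table)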

In [116]:
# first we create an instrument; in this case we will use the base class VisaInstrument, but any descendant will work
# In our instrument_sheet_directory we have a VNA with GPIB address 16
# we can also set the state_directory at the time of creation or later
vna=VisaInstrument("GPIB::16")
Unable to load resource entering emulation mode ...
In [117]:
# now we can save the current state to the desired location
# if you do not specify a location it will be auto-named
vna.save_state(state_path="./pyMez_Tour_Files/Right_Now_State.xml")
Out[117]:
'./pyMez_Tour_Files/Right_Now_State.xml'
In [118]:
# now the state is an XML file containing the set commands and the values
# these have been read from the InstrumentSheet, but they can also be provided explicitly
xml_state=InstrumentState("./pyMez_Tour_Files/Right_Now_State.xml")
In [119]:
xml_state.document.getElementsByTagName('State_Description')[0]
Out[119]:
<DOM Element: State_Description at 0x39721208>
In [120]:
print(xml_state)
<Instrument_State><State><Tuple Set="SENS:AVER" Value="Buffer Read at 2019-01-25T00:43:26.883000"/><Tuple Set="SENS:BAND" Value="Buffer Read at 2019-01-25T00:43:26.884000"/><Tuple Set="SOUR:POW" Value="Buffer Read at 2019-01-25T00:43:26.887000"/><Tuple Set="SENS:SWE:TYPE" Value="Buffer Read at 2019-01-25T00:43:26.885000"/><Tuple Set="SOUR:POW:CORR:STAT" Value="Buffer Read at 2019-01-25T00:43:26.888000"/><Tuple Set="SOUR:POW:SLOP" Value="Buffer Read at 2019-01-25T00:43:26.889000"/><Tuple Set="SENS:CORR:STAT" Value="Buffer Read at 2019-01-25T00:43:26.886000"/></State><State_Description><State_Timestamp>2019-01-25T00:43:26.892000</State_Timestamp><Instrument_Description>C:\ProgramData\Anaconda2\lib\site-packages\pyMez\Code\InstrumentControl\..\..\Instruments\E8361A_PNA_01.xml</Instrument_Description><State_Timestamp>2019-01-25T00:43:26.955000</State_Timestamp></State_Description><State/><State_Description/></Instrument_State>
<Instrument_State>
    <State>
        <Tuple Set="SENS:AVER" Value="Buffer Read at 2018-12-10T04:59:22.274000"/>
        <Tuple Set="SENS:BAND" Value="Buffer Read at 2018-12-10T04:59:22.275000"/>
        <Tuple Set="SOUR:POW" Value="Buffer Read at 2018-12-10T04:59:22.278000"/>
        <Tuple Set="SENS:SWE:TYPE" Value="Buffer Read at 2018-12-10T04:59:22.276000"/>
        <Tuple Set="SOUR:POW:CORR:STAT" Value="Buffer Read at 2018-12-10T04:59:22.279000"/>
        <Tuple Set="SOUR:POW:SLOP" Value="Buffer Read at 2018-12-10T04:59:22.280000"/>
        <Tuple Set="SENS:CORR:STAT" Value="Buffer Read at 2018-12-10T04:59:22.277000"/>
    </State>
    <State_Description>
        <State_Timestamp>2018-12-10T04:59:22.288000</State_Timestamp>
        <Instrument_Description>C:\ProgramData\Anaconda2\lib\site-packages\pyMez\Code\InstrumentControl\..\..\Instruments\E8361A_PNA_01.xml</Instrument_Description>
        <State_Timestamp>2018-12-10T05:00:58.017000</State_Timestamp>
    </State_Description>
</Instrument_State>
In [121]:
# again this XML data can be transformed to HTML using a style sheet
html_state=HTMLBase(html_text=xml_state.to_HTML(os.path.join(TESTS_DIRECTORY,"../XSL/DEFAULT_STATE_STYLE.xsl")))
html_state.show()
file://c:/users/sandersa/appdata/local/temp/1/tmpmmyui7.html
In [122]:
# if you create a series of GPIB commands you can index them so that they are written in order
state_table=[{"Index":0,"Set":"SOUR:POW:SLOP","Query":"SOUR:POW:SLOP?"},
             {"Index":1,"Set":"SOUR:POW","Query":"SOUR:POW?"}]
vna.save_state(state_path="./pyMez_Tour_Files/Reduced_Right_Now_State.xml",state_table=state_table)
Out[122]:
'./pyMez_Tour_Files/Reduced_Right_Now_State.xml'
In [123]:
xml_reduced_state=InstrumentState("./pyMez_Tour_Files/Reduced_Right_Now_State.xml")
print(xml_reduced_state)
<Instrument_State><State><Tuple Index="0" Query="SOUR:POW:SLOP?" Set="SOUR:POW:SLOP"/><Tuple Index="1" Query="SOUR:POW?" Set="SOUR:POW"/></State><State_Description><State_Timestamp>2019-01-25T00:43:27.230000</State_Timestamp><Instrument_Description>C:\ProgramData\Anaconda2\lib\site-packages\pyMez\Code\InstrumentControl\..\..\Instruments\E8361A_PNA_01.xml</Instrument_Description><State_Timestamp>2019-01-25T00:43:27.288000</State_Timestamp></State_Description><State/><State_Description/></Instrument_State>


Instruments

Instruments are typically a descendant of the VisaInstrument class and follow these design rules:

  1. Simple functions to read and write parameters to an instrument are added in the set_parameter, get_parameter style. This is to denote a distinction between class attributes and communication with the instrument.
  2. Complex measurements that return multiple values that have been parsed are in the .measure_quantity style. If these measurements require initialization, then a method .initialize_quantity should be added (a sketch of this pattern follows below).
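
Rule 1 is demonstrated in the next cell. As a minimal sketch of rule 2, with hypothetical GPIB commands (INIT:SPEC and FETC:SPEC?) and a hypothetical comma-separated response, an initialize/measure pair might look like this:

from pyMez import *

class MySpectrumInstrument(VisaInstrument):
    def initialize_spectrum(self,number_points=401):
        "Sends the (hypothetical) setup command needed before a spectrum measurement"
        self.write("INIT:SPEC {0}".format(number_points))
    def measure_spectrum(self):
        "Queries the (hypothetical) fetch command and parses the comma-separated response into a list of floats"
        # note: in emulation mode query returns a placeholder string, so the parsing only works on a real instrument
        response=self.query("FETC:SPEC?")
        return [float(value) for value in response.split(",")]
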
In [124]:
# Example of creating a new instrument class
class MyInstrument(VisaInstrument):
    def get_frequency(self):
        "Gets the instrument's frequency"
        frequency=self.query("Command_To_Get_Frequency")
        return frequency
    def set_frequency(self,frequency):
        "Sets the instrument's frequency"
        self.write("Command_To_Set_Frequency {0}".format(frequency))
In [125]:
instrument=MyInstrument("FakeAddress")
instrument.get_frequency()
The information sheet was not found defaulting to address
Unable to load resource entering emulation mode ...
The information sheet was not found defaulting to address
Out[125]:
'Buffer Read at 2019-01-25T00:43:27.628000'
In [126]:
instrument.set_frequency(1000)
In [127]:
instrument.history
Out[127]:
[{'Action': 'self.write',
  'Argument': 'Command_To_Get_Frequency',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:27.628000'},
 {'Action': 'self.read',
  'Argument': None,
  'Response': 'Buffer Read at 2019-01-25T00:43:27.628000',
  'Timestamp': '2019-01-25T00:43:27.628000'},
 {'Action': 'self.write',
  'Argument': 'Command_To_Set_Frequency 1000',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:27.654000'}]
In [128]:
# currently there are classes for VNA operation, HighSpeedOscilloscope, Picoammeter-Source, and Power Meters
help(VNA)
Help on class VNA in module pyMez.Code.InstrumentControl.Instruments:

class VNA(VisaInstrument)
 |  Control class for a linear VNA.
 |  The .measure_sparameters ans .measure_switch_terms return a S2PV1
 |  class that can be saved, printed or have a simple plot using show(). The attribute frequency_list
 |  stores the frequency points as Hz.
 |  
 |  Method resolution order:
 |      VNA
 |      VisaInstrument
 |      Code.DataHandlers.XMLModels.InstrumentSheet
 |      Code.DataHandlers.XMLModels.XMLBase
 |  
 |  Methods defined here:
 |  
 |  __init__(self, resource_name=None, **options)
 |      Initializes the E8631A control class
 |  
 |  add_all_traces(self, **options)
 |      Adds all Sparameter and wave parameter traces.
 |      Does not initialize the instrument. The trace names match those in the
 |      measure methods (S11,S12,..S22) and (A1_D1,B1_D1..B2_D2) by default it
 |      assumes port 1 and port 2 are being used. In addition, it assumes the B receiver names are [A,B,C,D].
 |       This method will cause an error if the traces are already defined
 |  
 |  add_segment(self, start, stop=None, number_points=None, step=None, frequency_units='Hz')
 |      Sets the VNA to a segment mode and appends a single entry in the frequency table. If start is the only specified
 |      parameter sets the entry to start=stop and number_points = 1. If step is specified calculates the number of points
 |      and sets start, stop, number_points on the VNA. It also stores the value into the attribute frequency_list.
 |      Note this function was primarily tested on an agilent which stores frequency to the nearest mHz.
 |  
 |  add_trace(self, trace_name, trace_parameter, drive_port=1, display_trace=True)
 |      Adds a single trace to the VNA. Trace parameters vary by instrument and can be ratios of
 |      recievers or raw receiver values. For instance, R1 is the a1 wave. Traditional Sparameters
 |      do not require the identification of a drive_port. Does not display trace on the front panel
 |  
 |  clear_window(self, window=1)
 |      Clears the  window of traces. Does not delete the variables
 |  
 |  get_IFBW(self)
 |      Returns the IFBW of the instrument in Hz
 |  
 |  get_frequency(self)
 |      Returns the frequency in python list format
 |  
 |  get_frequency_list(self)
 |      Returns the frequency list as read from the VNA
 |  
 |  get_power(self)
 |      Returns the power of the instrument in dbm
 |  
 |  get_source_output(self)
 |      Returns the state of the outputs. This is equivelent to vna.query('OUTP?')
 |  
 |  get_sweep_type(self)
 |      Returns the current sweep type. It can be LIN, LOG, or SEG
 |  
 |  initialize(self, **options)
 |      Intializes the system
 |  
 |  initialize_w1p(self, **options)
 |      Initializes the system for w1p acquisition, default works for ZVA
 |  
 |  initialize_w2p(self, **options)
 |      Initializes the system for w2p acquisition
 |  
 |  is_busy(self)
 |      Checks if the instrument is currently doing something and returns a boolean value
 |  
 |  measure_sparameters = return_data(self, *args, **kwargs)
 |  
 |  measure_switch_terms(self, **options)
 |      Measures switch terms and returns a s2p table in forward and reverse format. To return in port format
 |      set the option order= "PORT
 |  
 |  measure_w1p(self, **options)
 |      Triggers a single w1p measurement for a specified
 |      port and returns a w1p object.
 |  
 |  measure_w2p(self, **options)
 |      Triggers a single w2p measurement for a specified
 |      port and returns a w2p object.
 |  
 |  read_trace(self, trace_name)
 |      Returns a 2-d list of [[reParameter1,imParameter1],..[reParameterN,imParameterN]] where
 |      n is the number of points in the sweep. User is responsible for triggering the sweep and retrieving
 |       the frequency array vna.get_frequency_list()
 |  
 |  remove_all_segments(self)
 |      Removes all segments from VNA
 |  
 |  remove_segment(self, segment=1)
 |      Removes a the segment, default is segment 1
 |  
 |  set_IFBW(self, ifbw)
 |      Sets the IF Bandwidth of the instrument in Hz
 |  
 |  set_frequency(self, start, stop=None, number_points=None, step=None, type='LIN', frequency_units='Hz')
 |      Sets the VNA to a linear mode and creates a single entry in the frequency table. If start is the only specified
 |      parameter sets the entry to start=stop and number_points = 1. If step is specified calculates the number of points
 |      and sets start, stop, number_points on the VNA. It also stores the value into the attribute frequency_list.
 |      Note this function was primarily tested on an agilent which stores frequency to the nearest mHz.
 |  
 |  set_frequency_units(self, frequency_units='Hz')
 |      Sets the frequency units of the class, all values are still written to the VNA
 |      as Hz and the attrbiute frequncy_list is in Hz,
 |      however all commands that deal with sweeps and measurements will be in units
 |  
 |  set_power(self, power)
 |      Sets the power of the Instrument in dbm
 |  
 |  set_source_output(self, state=0)
 |      Sets all of the outputs of the VNA to OFF(0) or ON (1). This disables/enables all the source outputs.
 |  
 |  trigger_sweep(self)
 |      Triggers a single sweep of the VNA, note you need to wait for the sweep to finish before reading the
 |      values. It takes ~ #ports sourced*#points/IFBW
 |  
 |  write_frequency_table(self, frequency_table=None)
 |      Writes frequency_table to the instrument, the frequency table should be in the form
 |      [{start:,stop:,number_points:}..] or None
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from VisaInstrument:
 |  
 |  ask(self, command)
 |      Writes command and then reads a response
 |  
 |  close(self)
 |      Closes the VISA session
 |  
 |  get_state(self, state_query_dictionary=None, state_query_table=None)
 |      Gets the current state of the instrument. get_state accepts any query dictionary in
 |      the form state_query_dictionary={"GPIB_SET_COMMAND":"GPIB_QUERY_COMMAND",...} or any state_query_table
 |      in the form [{"Set":"GPIB_SET_COMMAND","Query":"GPIB_QUERY_COMMAND","Index":Optional_int_ordering commands,
 |      if no state is provided it returns the DEFAULT_STATE_QUERY_DICTIONARY as read in from the InstrumentSheet
 |  
 |  load_state(self, file_path)
 |      Loads a state from a file.
 |  
 |  query(self, command)
 |      Writes command and then reads a response
 |  
 |  read(self)
 |      Reads from the instrument
 |  
 |  save_current_state(self)
 |      Saves the state in self.current_state attribute
 |  
 |  save_state(self, state_path=None, state_dictionary=None, state_table=None, refresh_state=False)
 |      Saves any state dictionary to an xml file, with state_path,
 |      if not specified defaults to autonamed state and the default state dictionary refreshed at the time
 |      of the method call. If refresh_state=True it gets the state at the time of call otherwise
 |      the state is assumed to be all ready complete. state=instrument.get_state(state_dictionary) and then
 |      instrument.save_state(state). Or instrument.save_state(state_dictionary=state_dictionary,refresh=True)
 |  
 |  set_state(self, state_dictionary=None, state_table=None)
 |      Sets the instrument to the state specified by state_dictionary={Command:Value,..} pairs, or a list of dictionaries
 |      of the form state_table=[{"Set":Command,"Value":Value},..]
 |  
 |  update_current_state(self)
 |  
 |  write(self, command)
 |      Writes command to instrument
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from Code.DataHandlers.XMLModels.InstrumentSheet:
 |  
 |  add_entry(self, tag_name, text=None, description='Specific', **attribute_dictionary)
 |      Adds an entry to the instrument sheet.
 |  
 |  get_image_path(self)
 |      Tries to return the image path, requires image to be in
 |      <Image href="http://132.163.53.152:8080/home_media/img/Fischione_1040.jpg"/> format
 |  
 |  get_query_dictionary(self)
 |      Returns a set:query dictionary if there is a State_Commands element
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from Code.DataHandlers.XMLModels.XMLBase:
 |  
 |  __getitem__(self, item)
 |      This returns the items found by using xpath as a string.
 |      For example: XMLBase[".//BeforeCalibration/Item/SubItem[@Index='6']"] will return all of the elements
 |      with index=6. This is a thin wrapper of etree.findall
 |  
 |  __str__(self)
 |      Controls how XMLBAse is returned when a string function is called. Changed to using self.etree instead
 |      of self.document for better unicode support
 |  
 |  save(self, path=None)
 |      " Saves as an XML file
 |  
 |  save_HTML(self, file_path=None, XSLT=None)
 |      Saves a HTML transformation of the XML document using XLST at file_path. Defaults to
 |      an XLST in self.options["XSLT"] and file_path=self.path.replace('.xml','.html')
 |  
 |  show(self, mode='Window')
 |      Displays an XML document either as formatted text in the command line or in a
 |      window (using wx)
 |  
 |  to_HTML(self, XSLT=None)
 |      Returns an HTML string by applying an XSLT to the XML document
 |  
 |  update_document(self)
 |      Updates the attribute document from the self.etree.
 |  
 |  update_etree(self)
 |      Updates the attribute etree. Should be called anytime the xml content is changed
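
For reference, the state-handling methods above accept either a flat dictionary or an ordered table. Here is a minimal sketch of the two forms; the command strings match entries in the .commands list shown below, the numeric values are placeholders, and the set_state calls are commented out because they need a live or emulated instrument object:

# dictionary form: {set_command: value, ...}
state_dictionary = {"SOUR:POW": -10, "SENS:AVER": 1}
# table form: an ordered list of entries; "Index" is optional and controls ordering
state_table = [{"Index": 0, "Set": "SOUR:POW", "Value": -10},
               {"Index": 1, "Set": "SENS:AVER", "Value": 1}]
# either form can be passed to set_state
# instrument.set_state(state_dictionary=state_dictionary)
# instrument.set_state(state_table=state_table)
# get_state instead takes set:query pairs, e.g. {"SOUR:POW": "SOUR:POW?"}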


Emulated Instruments

When you create an instrument and it cannot be loaded, either because the GPIB bus is not connected or because there is no instrument at the specified address, the instrument class enters an emulation mode in which the commands sent to the instrument are recorded. This sets the attribute instrument.emulation_mode=True, which is useful for debugging commands and checking performance.
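
The cells below walk through this behavior step by step; as a condensed sketch of the debugging pattern, using only attributes demonstrated in this notebook, one might write:

# sketch of using emulation mode to debug a command sequence
instrument = VisaInstrument("GPIB::16")      # falls back to emulation if no hardware is present
if instrument.emulation_mode:
    instrument.write("SOUR:POW 5")           # commands are recorded rather than sent
    for entry in instrument.history:         # each entry has Action, Argument, Response, Timestamp
        print("{Action} {Argument}".format(**entry))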

In [129]:
emulated_instrument_with_sheet=VisaInstrument("GPIB::16")
emulated_instrument=VisaInstrument("GPIB::20000")
Unable to load resource entering emulation mode ...
The information sheet was not found defaulting to address
Unable to load resource entering emulation mode ...
The information sheet was not found defaulting to address
In [130]:
# if the instrument sheet is found, information about the instrument is available from the xml
emulated_instrument_with_sheet.idn
Out[130]:
'Agilent Technologies,E8361A,US43140754,A.07.50.67'
In [131]:
emulated_instrument_with_sheet.commands
Out[131]:
[u'SENS:BAND',
 u'SENS:BAND?',
 u'SENS:AVER',
 u'SENS:AVER?',
 u'SOUR:POW',
 u'SOUR:POW?',
 u'SOUR:POW:SLOP',
 u'SOUR:POW:SLOP?',
 u'SOUR:POW:CORR:STAT',
 u'SOUR:POW:CORR:STAT?',
 u'SENS:CORR:STAT',
 u'SENS:CORR:STAT?',
 u'SENS:SWE:TYPE',
 u'SENS:SWE:TYPE?']
In [132]:
# if an instrument sheet is not found in the instrument_description_directory, the instrument behaves
# the same way, but the .idn and .commands attributes are not populated
emulated_instrument.commands
Out[132]:
[]
In [133]:
# now if we write to the bus
emulated_instrument.write("SOUR:POW 5")
In [134]:
# we can see the history here
emulated_instrument.history
Out[134]:
[{'Action': 'self.write',
  'Argument': 'SOUR:POW 5',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.088000'}]
In [135]:
# reads from the bus look like this
emulated_instrument.read()
Out[135]:
'Buffer Read at 2019-01-25T00:43:28.151000'
In [136]:
# Now we can see the read result in the history; query or ask commands perform a write followed by a read
emulated_instrument.history
Out[136]:
[{'Action': 'self.write',
  'Argument': 'SOUR:POW 5',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.088000'},
 {'Action': 'self.read',
  'Argument': None,
  'Response': 'Buffer Read at 2019-01-25T00:43:28.151000',
  'Timestamp': '2019-01-25T00:43:28.151000'}]
In [137]:
# it also sets the emulation_mode to true
emulated_instrument.emulation_mode
Out[137]:
True
In [138]:
# now to see how the set_state command works 
state_to_set=[{"Index":0,"Set":"GPIB Set Command 1","Query":"Read Command 1","Value":2},
             {"Index":1,"Set":"GPIB Set Command 2","Query":"Read Command 2","Value":"MyValue2"}]
In [139]:
# we are using the table form instead of the dictionary form
emulated_instrument.set_state(state_table=state_to_set)
In [140]:
# Now we can see that the Set Command has the Value inserted into it
emulated_instrument.history
Out[140]:
[{'Action': 'self.write',
  'Argument': 'SOUR:POW 5',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.088000'},
 {'Action': 'self.read',
  'Argument': None,
  'Response': 'Buffer Read at 2019-01-25T00:43:28.151000',
  'Timestamp': '2019-01-25T00:43:28.151000'},
 {'Action': 'self.write',
  'Argument': 'GPIB Set Command 1 2',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.231000'},
 {'Action': 'self.write',
  'Argument': 'GPIB Set Command 2 MyValue2',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.231000'}]
In [141]:
# the get_state method returns a "Set" and a "Value" pair for each entry read from the instrument
emulated_instrument.get_state(state_query_table=state_to_set)
Out[141]:
[{'Index': 0,
  'Set': 'GPIB Set Command 1',
  'Value': 'Buffer Read at 2019-01-25T00:43:28.262000'},
 {'Index': 1,
  'Set': 'GPIB Set Command 2',
  'Value': 'Buffer Read at 2019-01-25T00:43:28.263000'}]
In [142]:
emulated_instrument.history
Out[142]:
[{'Action': 'self.write',
  'Argument': 'SOUR:POW 5',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.088000'},
 {'Action': 'self.read',
  'Argument': None,
  'Response': 'Buffer Read at 2019-01-25T00:43:28.151000',
  'Timestamp': '2019-01-25T00:43:28.151000'},
 {'Action': 'self.write',
  'Argument': 'GPIB Set Command 1 2',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.231000'},
 {'Action': 'self.write',
  'Argument': 'GPIB Set Command 2 MyValue2',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.231000'},
 {'Action': 'self.write',
  'Argument': 'Read Command 1',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.262000'},
 {'Action': 'self.read',
  'Argument': None,
  'Response': 'Buffer Read at 2019-01-25T00:43:28.262000',
  'Timestamp': '2019-01-25T00:43:28.262000'},
 {'Action': 'self.write',
  'Argument': 'Read Command 2',
  'Response': None,
  'Timestamp': '2019-01-25T00:43:28.263000'},
 {'Action': 'self.read',
  'Argument': None,
  'Response': 'Buffer Read at 2019-01-25T00:43:28.263000',
  'Timestamp': '2019-01-25T00:43:28.263000'}]


Utilities Overview

The subpackage pyMez.Code.Utils is designed to be a place to store functions and classes that are helpful to each of the other subpackages. For instance, the functions that auto-generate help files, the auto-naming function and timing decorators all reside in this package.


Help Utilities

The subpackage pyMez.Code.Utils.HelpUtils contains the functions used to auto-generate the pyMez documentation. This documentation is multi-tiered. The API help consists of HTML produced by the pdoc package (available on pip), which reads a python package and generates HTML using introspection; it was chosen for its simplicity and for its linking of source code, and it uses a custom template located in pyMez/Documentation/templates. In addition to the API help, a listing of all the functions and classes, organized by sub-package, is generated by the create_index_html_script: it reads the code in a package, finds any pattern that matches a class or function, and links it to the HTML help created by pdoc. Finally, this example page and the main page of the documentation are jupyter notebooks that have been converted to HTML using nbconvert. The create_examples_html_script reads all notebook files in pyMez/Documentation/Examples/jupyter and converts them to a parallel file structure in pyMez/Documentation/Examples/html; the .ipynb postfix in the links is then changed to .html by the change_links_examples_script. This requires manual editing of the Examples_Home notebook to include links to the jupyter examples.
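
As an illustration of the notebook-to-HTML step only, nbconvert's Python API can convert a single notebook like this (a generic sketch with a hypothetical file name; the actual create_examples_html_script walks the whole Examples directory and may differ in detail):

# generic sketch: convert one notebook to html with nbconvert
import io
from nbconvert import HTMLExporter

exporter = HTMLExporter()
body, resources = exporter.from_filename("pyMez_Tour.ipynb")   # hypothetical notebook name
with io.open("pyMez_Tour.html", "w", encoding="utf-8") as html_file:
    html_file.write(body)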

In [143]:
# we can get html help for any live object using return_help. This returns help for the object's full module.
from pyMez.Code.Utils.HelpUtils import *
html_help=HTMLBase(html_text=str(return_help(SNP)))
html_help.show()
file://c:/users/sandersa/appdata/local/temp/1/tmpo5ckvu.html


Naming and Alias

The subpackage pyMez.Code.Utils.Names contains the functions used to auto-generate names, and pyMez.Code.Utils.Alias contains functions for creating method aliases. Auto-naming follows the template {specific_descriptor}_{general_descriptor}_{isodate}_{iterator}.{extension}, where the iterator is chosen based on the files already present in the directory. The alias mechanism is demonstrated below, followed by a sketch of the underlying name transformation.

In [144]:
# for example
from pyMez.Code.Utils.Names import *
auto_name(directory="./pyMez_Tour_Files",specific_descriptor="Scope",general_descriptor="Measurement",extension="dat")
Out[144]:
'Scope_Measurement_20190124_001.dat'
In [145]:
auto_name(directory="./pyMez_Tour_Files",specific_descriptor="Scope",general_descriptor="Measurement",extension="txt")
Out[145]:
'Scope_Measurement_20190124_001.txt'
In [146]:
# the module Alias is used to create Aliases for methods 
from pyMez.Code.Utils.Alias import *
In [147]:
class MyClass(object):
    def __init__(self):
        self.littleAttribue=[]
        self.i_like_underscores=[]
        # this calls and executes the alias function
        for command in alias(self):
            exec(command)
    def my_method(self):
        pass
dir(MyClass)
        
Out[147]:
['__class__',
 '__delattr__',
 '__dict__',
 '__doc__',
 '__format__',
 '__getattribute__',
 '__hash__',
 '__init__',
 '__module__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 'my_method']
In [148]:
test_class=MyClass()
dir(test_class)
Out[148]:
['__class__',
 '__delattr__',
 '__dict__',
 '__doc__',
 '__format__',
 '__getattribute__',
 '__hash__',
 '__init__',
 '__module__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 'i_like_underscores',
 'littleAttribue',
 'myMethod',
 'my_method']
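
Note that dir(test_class) now contains both my_method and its camelCase alias myMethod, added by the alias() call in __init__. A minimal sketch of the snake_case-to-camelCase name transformation (illustrative only; the actual Alias module may build the alias code differently):

def camel_case(name):
    # illustrative: turn a snake_case method name into its camelCase alias
    parts = name.split("_")
    return parts[0] + "".join(part.capitalize() for part in parts[1:])

camel_case("my_method")            # -> 'myMethod'
camel_case("save_current_state")   # -> 'saveCurrentState'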


Other Utilities

In addition to the utilities covered above, there are utilities to time functions, to create HTML from HPBasic programs, and to fix other small issues.
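
For reference, a timing decorator of this kind can be written in a few lines. This is a generic sketch, not the pyMez implementation, which also reports the start and end timestamps:

import functools
import time

def simple_timer(func):
    # generic sketch of a timing decorator; pyMez's timer may differ in detail
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        elapsed = time.time() - start
        print("The function {0} took {1:.3f} seconds to run".format(func.__name__, elapsed))
        return result
    return wrapper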

In [149]:
from pyMez.Code.Utils.PerformanceUtils import *
In [150]:
@timer
def run_loop(length=20000):
    for i in range(length):
        time.sleep(.001)
In [151]:
run_loop()
The function run_loop started at 2019-01-24 17:43:31.070000 and ended at 2019-01-24 17:43:56.101000
It took 25.031 seconds to run
In [152]:
# tools to extract system and other metadata
from pyMez.Code.Utils.GetMetadata import *
In [153]:
get_system_metadata("./pyMez_Tour_Files/Solution_0.s4p")
Out[153]:
{'acess_time': '2018-12-19T23:16:12',
 'creation_time': '2018-12-19T23:16:12',
 'device': 0L,
 'group_id': 0,
 'ino': 0L,
 'mod_time': '2018-11-14T22:54:30',
 'mode': 33206,
 'number_links': 0,
 'size': 31218L,
 'user_id': 0}
In [154]:
get_metadata("./pyMez_Tour_Files/Code_Structure.png")
Out[154]:
{'acess_time': '2018-12-19T23:16:11',
 'author': None,
 'category': None,
 'comments': None,
 'creation_time': '2018-12-19T23:16:11',
 'device': 0L,
 'dpi': (150, 150),
 'gamma': 0.45455,
 'group_id': 0,
 'ino': 0L,
 'interlace': 1,
 'keywords': None,
 'mod_time': '2018-12-06T21:47:21',
 'mode': 33206,
 'number_links': 0,
 'size': 115910L,
 'srgb': 0,
 'subject': None,
 'title': None,
 'user_id': 0}