pydfmux changelog

Note that this should not just be a dump of the commit log.

This changelog doesn’t include major changes before May 2016. Most major changes are (or should be) made through pull requests. For information on events that predate this log, see the bitbucket merged pull request history.

March 2020: Further Python 3 Compatibility

A static version of tornado 4.5.3 is now imported directly from pydfmux.core, as a stop-gap measure until the core code can be updated to use the new asyncio package that ships with Python 3.

October 2018: Python 3 Compatibility

A number of changes have been made to make pydfmux compatible with python3.

  • Various formatting changes (print function, absolute imports)
  • Some packages have been reorganized and require different imports and slight interface adjustments (pickle, urllib, logging)
  • Dictionary keys(), values() and items() return view objects rather than lists in python3; handle them more carefully
  • Use open() instead of file() for file I/O
  • raw_input() and execfile() no longer exist in python3 (raw_input() became input())
  • Stricter consistency in indentation and whitespace handling
  • Consistent handling of unicode and bytestrings; encode/decode to/from bytes as necessary
  • Careful handling of true division (“/”) and integer division (“//”)

Two changes require further detail.

First, pickle files generated in python3 are not backward compatible by default. To maintain compatibility (at least for the time being, while data created in python2 is still in use), two compatibility functions have been added that read and write pickle files readable by all versions of python. Use load_pickle and save_pickle to ensure compatible pickle files everywhere.
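As a sketch of what such compatibility helpers involve (the `_compat` names below are illustrative; use pydfmux’s load_pickle and save_pickle in practice): pickle protocol 2 can be written and read by both major Python versions, and encoding="latin1" lets Python 3 decode bytestrings pickled by Python 2.

```python
import pickle

def save_pickle_compat(obj, path):
    # Protocol 2 is the highest pickle protocol Python 2 can read
    with open(path, "wb") as f:
        pickle.dump(obj, f, protocol=2)

def load_pickle_compat(path):
    # encoding="latin1" lets Python 3 load bytestrings pickled by Python 2
    with open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")
```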

Second, python3 introduced the asyncio package for asynchronous programming, and the tornado package (version 5.0 and newer) has been refactored to integrate asyncio into its IOLoop structure, with the intention of phasing out their own python2/3 compatible interface in later versions. Unfortunately, asyncio is fundamentally incompatible with the pydfmux architecture. For the time being, we require tornado <= 4.5.3. In the future, we will refactor pydfmux to integrate asyncio and aiohttp to handle IceBoard communication and asynchronous processing, which will necessitate dropping compatibility with python 2 altogether.

September 2018: API Improvements

Algorithms

All algorithms can now automatically filter out objects from the input query that are associated with IceBoards that are offline. This is handled by the filter_query keyword, which can be passed both to the @algorithm decorator (to set the default mode) and to the algorithm itself (to set the mode when the algorithm is called).
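The default-plus-override pattern can be illustrated with a generic decorator. This is only a sketch of the calling convention, not pydfmux’s actual @algorithm implementation; the list of dicts stands in for a real HWMQuery:

```python
import functools

def algorithm(filter_query=True):
    # Decorator-level keyword sets the default; the call-time keyword
    # takes precedence when the algorithm is invoked.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(query, *args, **kwargs):
            do_filter = kwargs.pop("filter_query", filter_query)
            if do_filter:
                # Drop objects whose board is offline (stand-in check)
                query = [obj for obj in query if obj.get("online", True)]
            return func(query, *args, **kwargs)
        return wrapper
    return decorator

@algorithm(filter_query=True)
def count_targets(query):
    return len(query)
```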

Querying by Relationships

Every query or Dfmux ORM object now has methods (really algorithms) for getting a query of all associated objects of a certain type, as defined by the hardware map. These methods are: get_bolos(), get_channels(), get_modules(), get_squids(), get_boards(), and a general get_objects().

Additionally, all queries have a get_online() method that filters the query to include only objects attached to IceBoards that are accessible over the network.

Other API Changes

  • Consistent handling of pstring() methods of Dfmux objects with pathstrings. Results of polling the pathstring are cached in the hardware map to avoid unnecessary database access.
  • Accessing a single Column attribute from a query of ORM objects is now much more efficient. The individual column values are queried directly, without creating individual ORM objects.
  • A new hwm attribute for query objects that returns the parent hardware map session.
  • A new to_query() method for ORM objects that returns the equivalent query.
  • A new count_distinct() method for query objects that returns the number of unique entries in the query. This is a much more efficient equivalent of len(query.all()).
  • The zero_combs() asynchronous function has been cleaned up and is now a real @algorithm for ReadoutChannel, ReadoutModule, MGMEZZ04 and Dfmux objects.
  • A new logging.notice() convenience function for creating NOTICE level log messages.
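For count_distinct(), the efficiency gain comes from counting inside the database instead of materializing every row as an ORM object. A minimal sqlite3 illustration (table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bolos (id INTEGER PRIMARY KEY, wafer TEXT)")
conn.executemany("INSERT INTO bolos (wafer) VALUES (?)",
                 [("w1",), ("w1",), ("w2",), ("w3",), ("w2",)])

# Counting in SQL avoids fetching every row and constructing objects,
# which is effectively what len(query.all()) does
(n_distinct,) = conn.execute(
    "SELECT COUNT(DISTINCT wafer) FROM bolos").fetchone()
```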

September 12, 2016: SQUID Saturation Checking, some algorithm argument changes

SQUID Saturation Checking

The rail monitor method check_ADC_rail now has a check_squid argument, which defaults to True. This means that everywhere the algorithms check for an ADC rail, they now also check whether the SQUID is saturated:

  1. First it looks for a recorded peak-to-peak value in the local HWM, then for one stored remotely on the board. If it finds neither, it assumes a default of 3.5mV
  2. It converts that peak-to-peak value to ADC counts
  3. It records 5,000 samples from the ADC (fast samples)
  4. It throws out the 400 most extreme points to avoid triggering on noise spikes
  5. If the peak-to-peak of the remaining points is >70% of the SQUID peak-to-peak, it changes the squid.state in the HWM and remotely on the iceboard to “saturated”, saves a pickle file of the data and the peak-to-peak to be used for debugging, and raises an exception, just like it would for an ADC rail
  6. If the peak-to-peak of the remaining points is instead <70% of the SQUID peak-to-peak and the recorded squid.state in the HWM is “saturated”, it changes that state to “unknown” and exits
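The trim-and-threshold core of the check can be sketched in a few lines (illustrative only; the actual logic lives in the rail monitor):

```python
def squid_looks_saturated(samples, squid_p2p_counts,
                          n_discard=400, threshold=0.70):
    # Sort and drop the n_discard most extreme points (half from each
    # end) so isolated noise spikes don't trigger the check
    trimmed = sorted(samples)[n_discard // 2 : len(samples) - n_discard // 2]
    observed_p2p = trimmed[-1] - trimmed[0]
    # Saturated if the trimmed peak-to-peak exceeds the threshold
    # fraction of the SQUID's known peak-to-peak (in ADC counts)
    return observed_p2p > threshold * squid_p2p_counts
```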

The exception is handled in the algs the same way any rail is.

If you see this happening and believe it to be in error, please send the pickle file output to Joshua M.

overbias_and_null

New argument: scale_by_frequency (default=False)

Scales the carrier bias amplitudes to correct for the frequency-dependent transfer function. If this option is selected, the HWM value (or the value given in the carrier_amplitude argument) will be treated as the “DC” value, so channels will be scaled up as a function of frequency according to the transfer function model.

This fixes the problem where higher-frequency channels are less overbiased, and lower-frequency channels are therefore often excessively overbiased.

When using this parameter you should use an amplitude about 30% lower than the one that worked without this argument enabled.
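To picture the scaling: the requested “DC” amplitude is divided by the (normalized) transfer function evaluated at each channel’s frequency. The one-pole roll-off below is purely a hypothetical stand-in for pydfmux’s actual transfer function model:

```python
def scaled_amplitude(dc_amplitude, freq_hz, transfer_function):
    # Treat the requested amplitude as the "DC" value and boost it by
    # the attenuation the transfer function predicts at this frequency
    return dc_amplitude / transfer_function(freq_hz)

def one_pole(freq_hz, f3db=3e6):
    # Hypothetical single-pole roll-off; NOT pydfmux's real model
    return 1.0 / (1.0 + (freq_hz / f3db) ** 2) ** 0.5
```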

gain_stepping

New argument: fix_nuller_gain (default=False)

This keeps the nuller gain fixed while adjusting carrier gain. The result is that it leaves you with more dynamic range in the nuller at the end of the algorithm, at the expense of a small hit in noise.

take_netanal

Behavior change:

If the netanal is being taken with a target of both the nuller and the carrier (so, target='both' or target=['nuller', 'carrier']), the selected amplitude will be used for the carrier network analysis, and then scaled down to a different value for the nuller network analysis.

This new value is calculated so that the current at the input of the SQUID approximately matches that of the carrier for a 10 Ohm comb when off-resonance.

This opens up a wider range of amplitudes that will have both sufficient S/N in the carrier sweep, and not saturate the SQUID in the nuller sweep.

August 12, 2016: New Logging Level, HWM Shortcuts, bolo algorithm changes

Bitbucket commits: fdeec7e through d7f54f0

These are primarily improvements rather than bugfixes.

General

Nearly all algorithms now have very robust error handling: while you will see exceptions in the console and logging files when they occur, they should never interrupt a script that is running any of the core algorithms. Instead you should get summaries of failures and successes, and data products that indicate success or failure (the data['outcome'] field) and include full stacktraces.

If anybody encounters an exception that stops execution while running these algorithms, please get in touch with me or take out a ticket. This is no longer “normal” behavior for code even when hardware is going catastrophically off the rails.

Logging

I’ve added a new logging level between INFO and WARNING called NOTICE, and upgraded a subset of the previously INFO-level messages to NOTICE.

You can switch your console logging threshold to this new mode with pydfmux.set_console_level("NOTICE").

To use this yourself, the syntax is logger.log(level=25, msg="My logging message"). Note that the intuitive thing of logger.notice(...) will NOT work.

This only changes what gets printed to your screen; all logging messages are still saved to disk. I’ve left only what I think is strictly useful in there. It may be a little disconcerting for people who are used to having activity fly by.
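Under the hood this uses the standard library’s logging machinery. A self-contained sketch of registering and emitting the NOTICE level (the logger name here is a placeholder):

```python
import logging

NOTICE = 25  # sits between INFO (20) and WARNING (30)
logging.addLevelName(NOTICE, "NOTICE")

logger = logging.getLogger("pydfmux_demo")
logger.setLevel(NOTICE)

# logger.notice(...) does not exist; use logger.log() with the level
logger.log(level=NOTICE, msg="My logging message")
```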

Note

In general I’ve never seen pydfmux hang. If you lose connection with the board, a timeout will eventually be raised, and you can always check what is going on by watching the pydfmux.log file update. On OSX you can view this very conveniently in the Console app (though it keeps the open logging file in RAM, so it may eat up your memory if you leave it open for too long).

HWM Shortcuts

You can now obtain HWMQuery objects for bolometers associated with SQUID or ReadoutModule objects (either as the objects themselves or queries) using the method get_bolos(). It is also possible to pass the method a state parameter to filter on.

# Starting from a HWMQuery of SQUID objects
>>> bolos_all = squids.get_bolos()
# Starting from a single SQUID object
>>> bolos_onesquid = squids[0].get_bolos()
# Starting from a HWMQuery of ReadoutModule objects
>>> bolos_all = rmods.get_bolos()
# Starting from a single ReadoutModule object
>>> bolos_onermod = rmods[0].get_bolos()
# Filtering on just bolometers that are overbiased
>>> bolos_filtered = rmods.get_bolos(state='overbiased')

Note

Filters will only work if the active in-memory HWM in that session has been updated with the parameters (i.e., if it was that session which ran overbias_and_null). This will change once we start using the remote storage.

channels.measure_noise

This is now a considerably more powerful attribute of channel objects, and will return a dictionary keyed by channel number containing the samples, in-phase TOD array, ASD frequencies, ASD amplitudes, mean white noise level, maximum noise spike, and TOD-derived white noise level for that channel (with the correct transfer function for both DAN enabled and non-DAN channels).

This algorithm natively always computes these quantities for every channel on the module, so it is just as fast to run it on one channel as it is to run it on 64 channels. Using it in a for-loop for individual channels would be slow and unnecessary.

Note

The TOD-derived noise (std / sqrt(nyquist)) gives a slightly higher estimate than the ASD-derived values, especially for non-Gaussian noise. This effect is typically no more than a few percent. I advocate using the ASD-derived values.
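The TOD-derived estimate (std / sqrt(nyquist)) can be written as a standalone helper; tod_white_noise is an illustrative name, not a pydfmux function, and the sample rate is whatever the readout is streaming at:

```python
import math

def tod_white_noise(tod, sample_rate_hz):
    # White-noise estimate from time-ordered data: the standard
    # deviation divided by sqrt of the Nyquist frequency (half the
    # sample rate)
    mean = sum(tod) / len(tod)
    std = math.sqrt(sum((x - mean) ** 2 for x in tod) / len(tod))
    return std / math.sqrt(sample_rate_hz / 2.0)
```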

overbias_and_null

I’ve significantly improved some of the automated cuts that overbias_and_null can make on bolometer channels as it is attempting to overbias them.

Use the maxnoise argument to set a threshold white noise level for a given channel’s readout bandwidth in pA/rt(Hz).

This is helpful to prevent trying to overbias and operate a bolometer that lands in a region of very high noise. bolos.overbias_and_null(maxnoise=20) is a reasonably strict threshold. Something like 50-100 is more permissive but will cull the ones that are obscenely bad and often cause problems.

When using the cold_overbias=True flag it now inspects whether the bolometer parasitics measured make physical sense, and removes channels with unphysical values.

This has been my primary way of determining cold-overbiasable channels by hand, and turns out to be very effective. As an aside, sometimes these channels can be recovered later by running the overbiasing algorithm on just them, often requiring a higher amplitude.

Warning

Since the algorithm is now empowered to refuse to overbias bolometers that fail these cuts, it is important to re-query your bolometers on their state before moving to the next step. This will not be necessary once the remote state information is implemented (soon), but I still think it is important to make explicit. See the example below.

May 27, 2016: New !ChannelMappings csv option for LC pathstrings

Bitbucket commit: 711d02f

Problem

We currently have 64x multiplexing firmware and 68x LC chips. While the !LCBoard csv supports 68 channels, the !ChannelMappings csv assumes a perfect 1-to-1 mapping between readout channel number and LC channel number, and therefore doesn’t address LC channels at all, only LC Boards. This means that LC channels 65-68 cannot be addressed without altering the ordering of the !LCBoard csv file.

Current !ChannelMappings csv format:

lc_board             bolometer        channel
My68xBoard           arg1a/1A.1.X   005/1/1/1/1

This maps the first LCChannel in the My68xBoard LCBoard csv to the first readout channel of board 1, mezzanine 1, module 1, in crate 005. Since there is no readout channel 65-68, there can be no mapping to LC channels 65-68.

Solution

You now have the option of using lc_path instead of lc_board in the column heading for the !ChannelMappings file. This lc_path is a pathstring like those already used to address bolometers (Wafer/Bolometer) or readout channels (crate/slot/mezzanine/module/channel) in the !ChannelMappings file.

New (optional) !ChannelMappings syntax like LCBoard/LCChannel:

lc_path             bolometer         channel
My68xBoard/68       arg1a/1A.1.X    005/1/1/1/1

This maps the 68th LCChannel in the My68xBoard LCBoard csv to the first readout channel of board 1, mezzanine 1, module 1, in crate 005. You can still only map 64 LC channels to readout channels, but can now select which set of 64 to use.
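Parsing the new lc_path column amounts to splitting off an optional channel index from the board name. parse_lc_path below is a hypothetical helper, not pydfmux’s actual parser:

```python
def parse_lc_path(lc_path):
    # "My68xBoard/68" -> ("My68xBoard", 68); a bare board name (the old
    # lc_board column) implies the default 1-to-1 channel mapping
    if "/" in lc_path:
        board, channel = lc_path.rsplit("/", 1)
        return board, int(channel)
    return lc_path, None
```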

May 18, 2016: On-Board Remote Storage/Retrieval of Tuning State

Pull Request #17

Detailed overview: https://bitbucket.org/winterlandcosmology/pydfmux/pull-requests/17/added-remote-store-and-retreive-state/diff

Merge Commit: 82e8a0c

Features require DfMUX Firmware Release 7 or above (May 17, 2016)

Problem

Dynamic state information is available in the in-memory SQL hardware map when running tuning scripts. These include the following parameters:

SQUIDS:
  • state
  • transimpedance
Bolometers:
  • state
  • rlatched
  • rnormal
  • rfrac_achieved

Unfortunately, these properties are only available within a single python session (which ran the algorithms to update them). If you close one python session and start another you lose the ability to determine which bolometers or SQUIDs are tuned.

This is particularly problematic for the Housekeeping daemon, or any modular code that is trying to succinctly keep track of the state of the instrument, since the only way for it to access this information is by parsing algorithm output files.

Solution

There are now a set of Tuber functions that allow limited bits of information to be stored remotely on the iceboards and retrieved.

Information can be stored/retrieved at the SQUID and Bolometer level, and so the functions are methods of these objects.

They are the following:

>>> squid.store_squid_state(squid, state=None, transimpedance=None, overwrite=False)
>>> squid.retrieve_squid_state()

>>> bolo.store_bolo_state(bolo, state=None, rlatched=None, rnormal=None, rfrac_achieved=None, overwrite=False)
>>> bolo.retrieve_bolo_state()

Note

If only a subset of the arguments is used when calling the store_* methods, any existing information outside that subset is preserved. If overwrite=True, it first resets all of those properties to None.

The retrieve_* functions return TuberObjects whose attributes are the arguments you see above.

>>> bstate = bolo.retrieve_bolo_state()
>>> bstate.state
'latched'
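The preserve-unless-overwrite semantics can be sketched in plain Python (store_bolo_state_sketch and the dict representation are illustrative, not the real Tuber method):

```python
FIELDS = ("state", "rlatched", "rnormal", "rfrac_achieved")

def store_bolo_state_sketch(existing, overwrite=False, **updates):
    # overwrite=True resets every property to None before applying the
    # updates; otherwise fields outside the updated subset are preserved
    merged = {k: None for k in FIELDS} if overwrite else dict(existing)
    merged.update({k: v for k, v in updates.items() if v is not None})
    return merged
```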

Warning

There is no way currently to enforce that the remote information is synced with the local HWM information. You must remember to update both the HWM state AND the remote state whenever code you write changes those quantities.

May 06, 2016: Add SQUID Bias Parameters To The HardwareMap

Pull Request #14

Link to page (very little discussion on bitbucket): https://bitbucket.org/winterlandcosmology/pydfmux/pull-requests/14/squidbiasprops/diff

Commit: a4072f4

Problem

We want to be able to add default bias properties, and special bias properties, to individual SQUID objects in the hardware map. Previously, tuning SQUIDs that required different bias parameter values wasn’t possible without running the SQUID tuning algorithm more than once.

Solution

SQUIDs now have every bias property required by the tuning algorithms added to their objects in the hardware map:

  • flux_start
  • flux_stop
  • flux_increment
  • squid_bias_reference
  • squid_bias_start
  • squid_bias_stop
  • squid_bias_increment
  • minap2p
  • force_bias_at_reference

There are two different ways to assign these SQUID tuning bias properties to specific SQUIDs in your hardware map.

Method One: Directly To The YAML.

Below is an example benchtop hardware map showing how SQUIDs that need particular tuning parameters may be given them:

hardware_map: !HardwareMap

    - !Dfmux
        serial: "0067"
        mezzanines:
            1: !MGMEZZ04
                squid_controller: !SQUIDController
                    serial: 06-01
                    squids:
                        1: !SQUID {'squid_bias_reference':2, 'minap2p': 0.05}
                        2: !SQUID {'squid_bias_reference':1}
                        3: !SQUID {}
                        4: !SQUID {}
            2: !MGMEZZ04
                squid_controller: !SQUIDController
                    serial: 06-02
                    squids:
                        1: !SQUID {'flux_start':1.5, 'flux_stop':3.0}
                        2: !SQUID {'flux_start':1.5, 'flux_stop':3.0}
                        3: !SQUID {}
                        4: !SQUID {}

Method Two: With a JSON file

Similar to the option to provide bolometer bias parameters with a JSON file instead of a CSV (- !JSONBiasProperties "json_mappings.json") we can do the same with SQUID bias properties using the following YAML directive at the bottom of the file:

- !JSONSQUIDBiasProperties "json_squid_props.json"

The JSON file then contains entries keyed with pathstrings to the readout module the SQUID is attached to:

{
    "0067/1/1": {
        "squid_bias_reference": 2,
        "minap2p": 0.05
    },
    "0067/1/2": {
        "squid_bias_reference": 1
    },
    "0067/2/1": {
        "flux_start": 1.5,
        "flux_stop": 3.0
    },
    "0067/2/2": {
        "flux_start": 1.5,
        "flux_stop": 3.0
    }
}
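Loading such a file is a one-liner once the contents are strict JSON (quoted keys, no trailing commas); load_squid_bias_props is a hypothetical name for illustration:

```python
import json

def load_squid_bias_props(path):
    # Entries are keyed by readout-module pathstrings like "0067/1/1";
    # values are dicts of SQUID bias properties for that module's SQUID
    with open(path) as f:
        return json.load(f)
```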

Default Bias Properties

Default bias properties for SQUIDs are provided in the same way they are for bolos.

In the YAML file the directive at the top of the file:

default_bias_properties: !DefaultBiasProperties
    Bolometers:
        overbias: True
        tune: True
        rfrac: 0.75
    SQUIDs:
        squid_bias_reference: 0.7
        minap2p: 0.1

Or as a JSON file that is imported at the top of the yaml with:

default_bias_properties: !DefaultBiasProperties "default_bias_properties.json"

such that default_bias_properties.json is:

{
  "Bolometers": {
    "overbias": true,
    "tune": true,
    "rfrac": 0.75
  },
  "SQUIDs": {
    "squid_bias_reference": 0.7,
    "minap2p": 0.1
  }
}

Warning

Algorithm Behavior Change.

If SQUID tuning parameters are available in the hardware map because you’ve used any of the above options, the tune_squid algorithm will prefer them over arguments it is passed.

To override this behavior, use the argument ignore_hwm_bias_properties=True, as in squids.tune_squid(..., ignore_hwm_bias_properties=True).