Malleable VIs: Practical Example Part 1 – OO Hardware Interfaces for LabVIEW/TestStand
In my first LabVIEW-related blog post, I introduced Malleable VIs, a new feature in LabVIEW 2017 that allows the creation of generic VIs that can adapt to different data types on the connector pane.
In this two-part post, I want to give a practical example from a current project I am working on to show some of the power of Malleable VIs. Part One (you are here!) introduces the project and how I am using LabVIEW Classes (OO) to create Hardware Abstraction Layers (HALs) in LabVIEW for use with a TestStand test system. Part Two delves further into the implementation and shows how I am using the Class Adaptation feature of Malleable VIs in the development of the test system.
If you’re not using LabVIEW Classes in your projects, I really hope you’ll continue to read on: using classes for hardware interfacing is, to me, one of the most powerful aspects of LabVIEW OO programming, allowing you to prototype code very quickly (through the class public interface/API) and switch between simulated and real hardware in your software.
I am rewriting the software for a test system for optical light sources using LabVIEW & TestStand that currently just uses LabVIEW (blog post idea! ‘When, and when not to, use TestStand’). The main reasons for rewriting the test system with TestStand are:
- To allow more flexibility for supporting new product codes
- To remove the requirement for a LabVIEW software developer to make changes to the test (e.g. test sequencing, test limits, some test parameters etc.)
The test fixture communicates with a number of bench-top instruments, most of which are connected via GPIB. The instruments include optical measurements/controllers such as Optical Power Meters (OPM), Laser Diode Controllers (LDC), Optical Spectrum Analysers (OSA) alongside more standard instruments such as DMMs, Scopes (OSC) etc.
Since I work remotely for the client, it would have been difficult to recreate the test setup locally (I’d rather not have 8 or so large, bulky instruments on my desk, thanks!) to test and run the LabVIEW code modules and TestStand sequences, so it was always my intention to have a simulation interface so that I can run the entire sequence without any physical hardware.
I wanted the simulation interface to have the following features:
- UI Panels for each instrument showing the current state of the instrument (to check it has been configured correctly)
- Injection of errors to simulate faulty/missing instruments (to check test sequence logic / error handling)
- A timestamped log of commands & responses for each instrument and a global log for all instruments (to check timing/sequencing of instruments/commands)
- A method to load simulated data sets (to check data analysis code modules & limit testing in the sequences against known good/bad data)
Benefits of Simulating Hardware
Having a ‘simulation mode’ for the hardware not only benefits me during development of the test sequences, but also offers a number of additional benefits to the client that makes it worthwhile to invest the time developing them:
- Test Development / New Product Introduction (NPI): The process engineers (those who develop the tests for the products) can develop/run their sequences in simulation mode at their desks without using up valuable time on the test fixtures and without risking damage to the hardware.
- Test Process Improvement: The simulation log allows the process engineer to see exactly how the test sequence is interacting with the test hardware and the unit under test (UUT), as well as the sequencing/timing of test steps. This can guide improvements to the test process such as to reduce test time – one of the key objectives of this project.
- Test Validation: The test can be run in ‘simulated’ mode and the results compared against a baseline. This can include injecting known data for good and bad parts (e.g. golden sample data) and checking that the failure modes are correctly detected by the test. This also paves the way for setting up automated testing of the test sequences themselves (e.g. continuous integration).
- Configuration Management: The logs can be used as evidence that changes to the test sequences or code modules have not had unintended consequences elsewhere in the test.
I’m sure there are plenty of other great reasons for simulating hardware (let me know in the comments) but hopefully even just from the examples above it is easy to justify making the investment of a small amount of time for it (building technical wealth!).
Hardware Abstraction Layers
To implement the ‘simulation mode’ in the software/sequences, I chose to use LabVIEW object-oriented programming to create Hardware Abstraction Layers. I won’t go into detail about HALs and their benefits as there are plenty of great posts and white papers on the subject. I am creating an abstract class for each instrument type (e.g. DMM, OSC, LDC etc.) which contains the various functions/methods used in the code modules / test sequences. I then write test sequences/code modules that call the methods from the abstract class but, using dynamic dispatch, an implementation class is substituted in at run-time.
Here is a simple example of dynamic dispatch in LabVIEW for the Laser Diode Controller (LDC) in the project:
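Since LabVIEW code is graphical and the screenshot can’t be reproduced as text, here is a rough Python analogue of the same idea: callers are written against the abstract class, and a concrete implementation is substituted in at run-time. Class and method names here are illustrative, not the project’s actual API.

```python
from abc import ABC, abstractmethod

class LDC(ABC):
    """Abstract Laser Diode Controller: the public API the sequences call."""

    @abstractmethod
    def set_current(self, milliamps: float) -> None: ...

    @abstractmethod
    def read_current(self) -> float: ...

class SimulatedLDC(LDC):
    """One possible implementation class: stores state, no hardware I/O."""

    def __init__(self) -> None:
        self._current = 0.0

    def set_current(self, milliamps: float) -> None:
        self._current = milliamps  # just record the setting

    def read_current(self) -> float:
        return self._current

def run_test_step(ldc: LDC) -> float:
    # Hardware-agnostic 'code module': it only sees the abstract LDC API,
    # so any implementation (simulated or physical) can be passed in.
    ldc.set_current(25.0)
    return ldc.read_current()
```

The equivalent of LabVIEW’s dynamic dispatch here is ordinary virtual method dispatch: `run_test_step` never knows which concrete class it was handed.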
For this project, it was fairly straightforward to take the functions from the existing code (which also supported multiple instruments, but using a hardware-specific case in a case structure to determine the implementation) to define the methods required for each instrument.
Once I had developed the structure of the abstract instrument classes, I was then able to develop the LabVIEW code modules and test sequences using the abstract classes, essentially creating test ‘stubs’ that didn’t actually do anything when called. Typically, you might have your abstract methods generate an error to prevent the abstract class from being called directly or use the ‘requires override’ flag to defend against ‘missing’ a required implementation VI.
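Both defensive techniques mentioned above (abstract methods that generate an error, and the ‘requires override’ flag) have direct analogues in textual OO languages. A minimal Python sketch, with an illustrative instrument class:

```python
from abc import ABC, abstractmethod

class OPM(ABC):
    """Abstract Optical Power Meter used only as a 'stub' at this stage."""

    # Analogue of 'requires override': a subclass that does not implement
    # read_power (or OPM itself) cannot be instantiated -- TypeError.
    @abstractmethod
    def read_power(self) -> float:
        # Analogue of 'abstract method generates an error': even if a
        # subclass delegates to this body, it fails loudly rather than
        # silently doing nothing.
        raise NotImplementedError("read_power must be overridden")
```

Either mechanism on its own catches a ‘missing’ implementation; using both means the failure is caught as early as possible.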
I did things in this order (top-down) because to me, the complexity in this project was in transferring/replicating the functionality of the current LabVIEW application so I wanted to focus on that first (identifying and implementing the different test recipes and test steps). Secondly, at the time I started writing the sequences, I wasn’t sure which hardware would need to be supported for the initial commissioning of the test system (some is legacy hardware, some is hardware used only at a contract manufacturer etc.).
Side-note: Calling Classes in TestStand
LabVIEW 2012 and later, used with TestStand (I’m not sure of the minimum TestStand version required), supports dynamic dispatch on class method calls. This powerful feature allows me to call the methods from the abstract class in TestStand (thereby creating hardware-agnostic sequences) and pass the class reference object (read: class wire) around the test sequence. I can also pass it through another VI/code module that might perform a more advanced test step.
Below is an example of this from one of the test sequences I have developed, using an Optical Power Meter to find a power range from the UUT. I call the OPM.lvclass methods directly to configure the instrument, but then pass the OPM reference to another code module which uses it to perform the power search. This allows the hardware setup to be changed in the test sequence without needing to change the LabVIEW code.
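In textual OO terms, the pattern is simply: configure the instrument object in the ‘sequence’, then hand the same object to a code module that only knows the abstract API. The `find_power_range` helper and the reading values below are hypothetical, purely to illustrate the shape of the pattern:

```python
class SimulatedOPM:
    """Illustrative simulated power meter fed with a canned data set."""

    def __init__(self, readings_dbm):
        self._readings = iter(readings_dbm)
        self.wavelength_nm = None

    def configure(self, wavelength_nm: float) -> None:
        self.wavelength_nm = wavelength_nm  # record config (for a UI/log)

    def read_power(self) -> float:
        return next(self._readings)

def find_power_range(opm, samples: int = 3):
    # Hypothetical 'power search' code module: it receives whichever OPM
    # implementation the sequence chose, so the same module works with
    # simulated or physical hardware.
    readings = [opm.read_power() for _ in range(samples)]
    return min(readings), max(readings)

# In the 'sequence': configure directly, then pass the same object in.
opm = SimulatedOPM([-3.1, -2.8, -3.4])
opm.configure(wavelength_nm=1550)
low, high = find_power_range(opm)
```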
Simulated / Physical Implementations – Common Functionality
Now that I have my test sequences developed using the abstract classes, I need to think about creating the implementation classes for the simulated instruments and then for the physical instruments.
Creating the implementation classes for the physical hardware was relatively straightforward for this project. Since the code to talk to the instruments already existed, and I had used the existing VIs as the template for my abstract methods, all I had to do was copy + paste from the existing VIs and perform some housekeeping (tidying up the code, documenting, removing global variables).
I had to think a bit more about the architecture for the simulated device classes – each one needed to launch a UI panel on ‘Initialise’, shut it down on ‘Close’, and handle command/response communication between the various simulated methods and the UI panel.
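As a rough, UI-free Python sketch of that architecture (the ‘panel’ here is just an in-process message queue standing in for the real LabVIEW UI panel; all names are illustrative):

```python
import time
from queue import Queue

class SimulatedInstrument:
    """Command/response simulation core. Each method sends a command to the
    'panel' and records a timestamped entry in the instrument's log."""

    def __init__(self) -> None:
        self.log = []       # per-instrument timestamped command/response log
        self._panel = None

    def initialise(self) -> None:
        self._panel = Queue()   # stands in for launching the UI panel

    def close(self) -> None:
        self._panel = None      # stands in for shutting the panel down

    def command(self, cmd: str) -> str:
        self._panel.put(cmd)            # command sent to the panel
        response = f"OK:{cmd}"          # canned response for this sketch
        self.log.append((time.time(), cmd, response))
        return response
```

A real version would also feed a global log shared by all instruments, which is what makes the timing/sequencing analysis described earlier possible.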
This leads to something that looks like the following UML Class Diagram:
In both cases, there is significant common functionality: between the various simulation implementation classes (launch/close UI panel, send message + response) and between the various GPIB instrument classes (e.g. GPIB write + read). This doesn’t fit the typical parent-child class hierarchy (and LabVIEW doesn’t support inheritance from multiple classes) – I wouldn’t want to put the simulation-related/GPIB-related functionality in the parent class.
Therefore, you would either need to replicate this functionality in each class (no code re-use) or call functions from a shared library.
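In a textual language the ‘shared library’ approach is just plain functions (or a composed helper object) that any instrument class can call, so the write+read logic lives in one place. A sketch with illustrative names and SCPI-style commands:

```python
def gpib_query(session, command: str) -> str:
    # Shared-library style: one write+read routine reused by every
    # instrument class, instead of duplicating it in each one.
    session.write(command)
    return session.read()

class DMM:
    def __init__(self, session) -> None:
        self._session = session

    def measure_voltage(self) -> float:
        return float(gpib_query(self._session, "MEAS:VOLT:DC?"))

class Scope:
    def __init__(self, session) -> None:
        self._session = session

    def identify(self) -> str:
        return gpib_query(self._session, "*IDN?")
```

This gets the re-use, but the shared code lives outside the class hierarchy entirely, which is the trade-off Part Two’s Malleable VI approach aims to improve on.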
In Part Two, you will see how I am using Malleable VIs to take the ‘shared library’ idea and improve on it for tidier and more readable code.
Continue Reading in Part Two ->
Hi Sam, I discovered your blog, great content :). I have a question about the usage of LV classes with TestStand: I noticed that you are using VI Call for your methods. Is this call mode compatible with Dynamic Dispatch? I tried using the Class Member Call but had slow performance in TestStand even though my class was small, which is something I need to avoid. So my question is: in your experience, is VI Call enough for most cases, and which drawbacks could I expect?
Thanks for your attention sir.
That particular screenshot is not of a class method – it is a VI which wraps some of the low-level methods to perform a measurement. I believe you must use the Class Member Call for Dynamic Dispatch.
In our test system, the classes are actually compiled into Packed Project Libraries – this has many advantages (and some pitfalls), one of the main ones being improved load/execution time (as your class is contained in a single file on disk and is already compiled).
I don’t remember having any issues with calling the classes directly – what kind of performance are you getting vs. what are you looking for? Are you talking about loading times, or execution? Do you maybe have a problem with dependencies which is causing the slow performance? Just some things to think about!