Notes on CCU/SCU Testbed

Note: This pseudo-random stream of thought is for entertainment only!

Introduction

The servo project will be developing a number of evolving control algorithms and plant model simulations for use on the GBT. A 'testbed' has been described in five different configurations:

Level 1:

This is foreseen to be a Matlab/Simulink environment, running on a controls engineer's workstation. At this level the design is conceptual.

Level 2:

At this step, the level 1 design is converted to a time-domain representation. There are at least three steps at this level:
  • 2(a) Conversion to time-domain representation
  • 2(b) Export/Compilation or custom coding of algorithm into a library module for two targets: Windows and RTAI/Linux.
  • 2(c) Import of the DLL module and verification against the native Matlab code

Level 3:

This is the same as level 4 (described next), but with an added virtual-time capability and debugging tools to support 'single-stepping' and C++-level debugging. (Currently this step is optional.)

Level 4:

This configuration utilizes the new framework being developed for this project, running the prototype control in real time against a plant model emulation. All I/O is virtualized, and the control module is played against a real-time simulation of the plant.

Level 5:

At this level the servo control computer inputs and outputs are connected in the same way they would be at the telescope. For example, servo amplifier signals would be sent to real D/A devices. The plant simulator would require additional hardware to 'mirror' the servo computer. The main difference here is that all I/O with the plant is sent via the interfaces used in the final deployment. This would shake out any timing difficulties, etc.

Development Flow:

The development flow between levels 1 & 2 in the Matlab environment and levels 3-5 in a C++/NRAO environment is currently TBD (in tall letters). Suggestions have been made to use Simulink's C-code generation utility.

Algorithm testing and development are done at levels 1 & 2; the primary intent of levels 3-4 is the integration of the algorithm with the balance of the CCU software. Issues such as determinism and computability are analyzed. Level 5 focuses further on timing, and includes the use of signal conditioning hardware. A temporary but useful configuration of level 5 might be to run the system "in parallel" with the current system.

Other Architectural Notes

It is important to understand that we do not yet have a firm architecture in place, and are still investigating a number of options. Generally we divide the code into four distinct but related pieces:
  • Common Framework Services - The 'generic' support services which provide a rich toolset and a foundation to build on.
  • Control Algorithm 'Plug-ins' - Specialized code which implements a control algorithm. Since algorithms are envisioned to evolve over time, it seems important to decouple them from the core code.
  • Device Specific Code - Code which has knowledge of hardware or an interface. Examples might be: a digital I/O driver/library; an interface module which provides the ACU interface; etc.
  • Configuration Recipe - This is a textual or XML file which expresses the number of instances of an object, and its relationships with other objects and services in the system.

Common Framework Services

There are a number of common functions which will be required in all versions of the new servo. We often refer to these common functions as a 'framework'. Some ideas for various types of infrastructure are listed below.

Real-time threading service and dispatching

The framework should be able to run various tasks at a variety of periods and priorities. It would be nice for modules to be able to express their timing needs and then register a method to be called at a given periodicity and priority. Hopefully this will limit the amount of OS-specific code. Running tasks in this manner also easily allows monitoring the tasks to verify timing.
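
As a rough illustration, a module might express its timing needs roughly as below. All the names here (TaskSpec, Dispatcher, register_task) are invented for this sketch, and a real implementation would use RTAI/Linux scheduling primitives rather than std::thread; the overrun check hints at the run-time monitoring discussed next.

    // Hypothetical sketch of a periodic-task registration interface.
    #include <chrono>
    #include <cstdio>
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    struct TaskSpec {
        std::string               name;      // used in monitoring output
        std::chrono::milliseconds period;    // requested periodicity
        int                       priority;  // would map onto an RT priority
        std::function<void()>     run;       // method called each cycle
    };

    class Dispatcher {
    public:
        // Modules express their timing needs here instead of creating threads
        // themselves; this keeps the OS-specific code in one place.
        void register_task(TaskSpec spec) { tasks_.push_back(std::move(spec)); }

        // Spawn one thread per task; time each cycle so overruns can be flagged.
        void start(int cycles) {
            for (auto& t : tasks_) {
                threads_.emplace_back([&t, cycles] {
                    for (int i = 0; i < cycles; ++i) {
                        auto t0 = std::chrono::steady_clock::now();
                        t.run();
                        auto dt = std::chrono::steady_clock::now() - t0;
                        if (dt > t.period)
                            std::printf("%s overran its period\n", t.name.c_str());
                        std::this_thread::sleep_for(t.period - dt);
                    }
                });
            }
            for (auto& th : threads_) th.join();
        }

    private:
        std::vector<TaskSpec>    tasks_;
        std::vector<std::thread> threads_;
    };

    int main() {
        Dispatcher d;
        d.register_task({"position_loop", std::chrono::milliseconds(10), 90,
                         [] { /* read feedback, update control output */ }});
        d.start(100);   // roughly one second of 10 ms cycles
    }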

Run-time monitoring

As mentioned above, the framework should be able to answer the question: "are tasks running often enough or taking too long?"

Object library loading

This concept is similar to the 'container' concept. Application code specific to a given axis is written in library form, and loaded dynamically by a standard program. This program constructs instances of class objects from a textual (or xml?) 'recipe'. The recipe is a description of object relationships, and some parametric data such as default initialization values. This concept is an example of the 'Builder' design pattern.
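
A minimal sketch of the idea, assuming a simple line-oriented recipe of the form 'instance-name type key=value ...'; the recipe format and the Component/Builder names are placeholders, not a settled design:

    // Recipe-driven construction ('Builder' pattern), illustrative names only.
    #include <functional>
    #include <istream>
    #include <map>
    #include <memory>
    #include <sstream>
    #include <string>
    #include <vector>

    // Minimal interface every dynamically created object would implement.
    struct Component {
        virtual ~Component() = default;
        virtual void init(const std::map<std::string, std::string>& params) = 0;
    };

    class Builder {
    public:
        using Factory = std::function<std::unique_ptr<Component>()>;

        // Loaded libraries register the types they know how to create.
        void register_type(const std::string& type, Factory f) { factories_[type] = std::move(f); }

        // Each recipe line:  <instance-name> <type> [key=value ...]
        // e.g. "azimuth_filter LowPassFilter cutoff_hz=5"
        std::vector<std::unique_ptr<Component>> build(std::istream& recipe) {
            std::vector<std::unique_ptr<Component>> objects;
            std::string line;
            while (std::getline(recipe, line)) {
                std::istringstream ls(line);
                std::string name, type, kv;
                if (!(ls >> name >> type)) continue;          // skip blank lines
                std::map<std::string, std::string> params;
                while (ls >> kv) {
                    auto eq = kv.find('=');
                    if (eq != std::string::npos)
                        params[kv.substr(0, eq)] = kv.substr(eq + 1);
                }
                auto obj = factories_.at(type)();             // throws on an unknown type
                obj->init(params);                            // default initialization values
                objects.push_back(std::move(obj));
            }
            return objects;
        }

    private:
        std::map<std::string, Factory> factories_;
    };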

A counterpoint here is that, if overused, this can make the system more opaque to maintainers. (i.e., examining the code, one sometimes wonders how certain miracles occur. An extreme case is important checks suddenly being inhibited by editing a config or XML file.) Therefore there should be some reflection on how important infinite reconfigurability really is, and good judgment should be exercised.

Digital signal processing

The low-pass filtering of digitized analog data should be a standard construct. This would need to be coordinated with the multiplexing of I/O.
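
For instance, a first-order IIR low-pass filter could be packaged as a small reusable class. This is only a sketch with assumed names; the sample rate must match whatever rate the I/O scheduler actually delivers samples at:

    // Sketch of a reusable first-order low-pass filter for digitized analog data.
    class LowPassFilter {
    public:
        // cutoff_hz: -3 dB point; sample_hz: rate at which update() will be called.
        LowPassFilter(double cutoff_hz, double sample_hz) {
            const double pi = 3.14159265358979323846;
            const double rc = 1.0 / (2.0 * pi * cutoff_hz);
            const double dt = 1.0 / sample_hz;
            alpha_ = dt / (rc + dt);          // standard first-order IIR coefficient
        }

        // Called once per I/O cycle with the newly digitized sample.
        double update(double sample) {
            state_ += alpha_ * (sample - state_);
            return state_;
        }

    private:
        double alpha_ = 0.0;
        double state_ = 0.0;   // filter output / internal state
    };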

I/O Multiplexing, Scheduling, and Aggregation

At some point the system will interface to hardware. This access should be coordinated to be efficient and to avoid resource conflicts among tasks competing for access to the same hardware.

This area seems like an important one for being able to route I/O from sensor inputs to various modules in the system.
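
One possible shape for this routing, sketched with invented names: a single I/O task reads the hardware and republishes values on named channels that other modules subscribe to, so only one place in the system touches the device.

    // Sketch of a channel registry for routing sensor inputs to modules.
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    class IoMultiplexer {
    public:
        using Consumer = std::function<void(double)>;

        // Modules ask for data by channel name rather than touching hardware.
        void subscribe(const std::string& channel, Consumer c) {
            consumers_[channel].push_back(std::move(c));
        }

        // Called by the single I/O task after each hardware read; fan the
        // value out to every module that registered interest.
        void publish(const std::string& channel, double value) {
            for (auto& c : consumers_[channel]) c(value);
        }

    private:
        std::map<std::string, std::vector<Consumer>> consumers_;
    };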

Virtual Time Service

It may be useful to be able to slow-down virtual time, to run the system in a debugging mode. This would permit analysis with a sophisticated structural model at less than real-time rates. Note that this interacts with task scheduling mechanisms.
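
A sketch of one way to provide this: a clock object returning real elapsed time multiplied by a scale factor, which the task scheduler would consult instead of the wall clock. The interface is an assumption:

    // Sketch of a virtual-time source: real elapsed time scaled by a factor.
    #include <chrono>

    class VirtualClock {
    public:
        // scale < 1.0 slows virtual time (e.g. 0.1 runs 10x slower than real time).
        explicit VirtualClock(double scale)
            : scale_(scale), start_(std::chrono::steady_clock::now()) {}

        // The scheduler would call this instead of reading the wall clock.
        std::chrono::nanoseconds now() const {
            const auto real = std::chrono::steady_clock::now() - start_;
            return std::chrono::duration_cast<std::chrono::nanoseconds>(real * scale_);
        }

    private:
        double scale_;
        std::chrono::steady_clock::time_point start_;
    };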

Inter-Axis Messaging (TBD)

This 'service' provides a mechanism to sense inter-axis interlocks, and a way to publish axis state information for other axes. A possible example here is the subreflector axes, which really must operate as a unit, but have individual axis loop control.
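
Since this service is still TBD, the following is only a speculative sketch of what publishing axis state might look like; the AxisState fields and AxisBus interface are invented for illustration.

    // Speculative sketch of inter-axis state publication (interface is TBD).
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    struct AxisState {
        std::string axis;        // e.g. "subreflector_y"
        double      position;    // current position in axis units
        bool        interlocked; // true if this axis is asserting an interlock
    };

    class AxisBus {
    public:
        using Listener = std::function<void(const AxisState&)>;

        // e.g. each subreflector axis listens to its siblings so the set
        // can behave as a unit while keeping individual loop control.
        void subscribe(const std::string& axis, Listener l) {
            listeners_[axis].push_back(std::move(l));
        }

        void publish(const AxisState& s) {
            for (auto& l : listeners_[s.axis]) l(s);
        }

    private:
        std::map<std::string, std::vector<Listener>> listeners_;
    };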

Logging Services

A number of categories here such as:
  • Control loop data logging, triggered by events
  • General status & fault logs
  • User command logs

Device Specific Code

As the title implies, this is where the implementation details of a specific hardware device or external interface live. Device-specific code acts as an adapter between the API that a device driver or external interface requires and the framework's multiplexing/messaging interface.
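
A sketch of the adapter idea, where dio_read_bank() stands in for a vendor driver call; both that call and the class names are hypothetical.

    // Device-specific code as an adapter: the only place that knows the vendor API.
    #include <cstdint>

    // Stand-in for a vendor driver call (normally declared in the vendor's header).
    extern "C" std::uint16_t dio_read_bank(int bank);

    // Framework-facing interface the multiplexing layer expects of any input.
    struct DigitalInputSource {
        virtual ~DigitalInputSource() = default;
        virtual bool read_bit(int bit) = 0;
    };

    class DioBoardAdapter : public DigitalInputSource {
    public:
        explicit DioBoardAdapter(int bank) : bank_(bank) {}

        // One driver access per call; the rest of the system never sees the API.
        bool read_bit(int bit) override {
            const std::uint16_t bits = dio_read_bank(bank_);
            return (bits >> bit) & 1u;
        }

    private:
        int bank_;
    };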

Axis Specific Code

Here is where the 'rubber meets the road'. The central goal of each axis will be to make the axis follow its command input. Each axis will have (see the sketch after this list):
  • a control-algorithm
  • a command input
  • one or more inputs
  • one or more outputs
  • a status/interlock checker
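
A sketch of how those five pieces might compose into a single axis object; all interface names here are assumptions, not an agreed design.

    // Illustrative composition of one axis from its five pieces.
    #include <cstddef>
    #include <memory>
    #include <vector>

    struct ControlAlgorithm {            // the dynamically loaded 'plug-in'
        virtual ~ControlAlgorithm() = default;
        virtual double update(double command, const std::vector<double>& inputs) = 0;
    };

    struct StatusChecker {               // status/interlock checks for this axis
        virtual ~StatusChecker() = default;
        virtual bool ok_to_drive() = 0;
    };

    class Axis {
    public:
        Axis(std::unique_ptr<ControlAlgorithm> alg,
             std::unique_ptr<StatusChecker> status,
             std::size_t n_inputs)
            : alg_(std::move(alg)), status_(std::move(status)), inputs_(n_inputs) {}

        void set_command(double cmd) { command_ = cmd; }            // command input
        void set_input(std::size_t i, double v) { inputs_[i] = v; } // sensor inputs

        // One control cycle: run the algorithm unless an interlock forbids it.
        double cycle() {
            output_ = status_->ok_to_drive() ? alg_->update(command_, inputs_) : 0.0;
            return output_;                                         // output(s)
        }

    private:
        std::unique_ptr<ControlAlgorithm> alg_;
        std::unique_ptr<StatusChecker>    status_;
        double              command_ = 0.0;
        std::vector<double> inputs_;
        double              output_ = 0.0;
    };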

CCU to Plant Simulator Interfaces??

The CCU will have a limited number of outputs which the plant simulator will need to accept. The plant will also need to generate a number of feedback signals. Since the control algorithm will be developed over time, the plant model will require a similar, revision-controlled pairing.

CCU                      Plant Simulator         Feedback to CCU
drive motor I_cmd[n]     I_cmd[n]                I_feedback[n]
                                                 V_feedback[n] (tachs)
                                                 P_feedback (encoders)
                                                 Armature[n] (V, A)
                                                 Field[n] (V, A)
???                      Disturbance Input(s)
Trajectory_cmds
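
For what it's worth, the drive/feedback rows of that table might map onto per-cycle data structures roughly like the ones below; only the signal names come from the table, the struct layout itself is an assumption.

    // Sketch of per-cycle signal bundles implied by the table above.
    #include <vector>

    // CCU -> plant simulator: one current command per drive motor [n].
    struct DriveCommands {
        std::vector<double> I_cmd;
    };

    // Plant simulator -> CCU feedback, per drive/sensor [n].
    struct PlantFeedback {
        std::vector<double> I_feedback;             // current feedback
        std::vector<double> V_feedback;             // tach (velocity) feedback
        std::vector<double> P_feedback;             // encoder (position) feedback
        std::vector<double> armature_V, armature_A; // Armature[n] (V, A)
        std::vector<double> field_V, field_A;       // Field[n] (V, A)
    };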

Code Partitioning:

My current thinking is that we will support the dynamic loading of control algorithms and some dynamic I/O configuration; however, the more static areas (such as the status/fault/interlock handling that exists in hardware) will be treated as static. I don't like the idea of dynamically loading status-processing modules, due to the possibility of missing a module or input. The same goes for the plant simulations: there will be a static section of code which emulates the status of the current hardware and PLC.

-- JoeBrandt - 31 Mar 2008