Flagdata Refactoring

Current design

The following diagram shows the current design implemented for flagdata. As you can see:
  • The columns refer to the 'layer', from the task down to the agents, passing through the measurement set.

  • The rows refer to the chain of sequential steps that take place while running the task.

  • The horizontal arrows indicate subsequent calls between different layers.

  • The vertical arrows indicate sequential steps within a layer.

We think it is useful to have this diagram as a starting point, so that we can map the re-design requirements to the classes and methods in the current implementation that have to be modified.


FlaggingClassSchema.png

Requirements analysis

The following list of requirements comes from the first iteration, started by Jeff Kern, in which the main goals were set. Shortly afterwards this list was extended and complemented in a second iteration by Urvashi RV. Additionally we include our comments in order to determine how the top level requirements can be broken down and mapped to new classes and methods.

Data handling

Requirement 1: Refactor the data handling, encapsulating all the functionality in one single class that is generic for MSs, Single Dish data and Calibration Tables.

  • Jeff Kern:

    • Refactor the read data method to be a Flag Data Handler. Thus the Data Handler class will be responsible for pulling the data out of the Vis Buffer. One intention here is to have the Flag Data Handler class be such that this could be used by the Single Dish side of things as well.

  • Urvashi RV:

    • LightFlagger will be the main class that provides access functions for the tool layer. It holds an MS, a MS-Selected reference MS, a Vis Set, and a list of LF Base objects. Its "run" method sets up the visibility iterator, reads data and flags from the visibility buffers into a local array, and passes these arrays by reference to all the agents (in a loop). After all agents are done, it runs the display/statistics agent, and then optionally writes the flags back to the MS (via the visibility buffers).

    • We need to support different types of input data – regular Measurement Sets, single-dish data, and perhaps calibration-tables. All three can have type-specific flagging methods (MS's have shadowing, Cal-tables have Tsys-based flagging, Single-dish has something else), which will require meta-data access from inside the flag methods. There can also be generic autoflag algorithms that do not require any meta-data, and can run just on 2D arrays of numbers (e.g. visibility amplitudes as a function of time and freq, or calibration gain solutions as a function of time/freq, etc). These generic methods should be able to operate on any kind of input data.

    • To supply meta-data for MS's, we had suggested sending in a " const Vis Buffer " to the flagmethod.runMethod(), but this is too restrictive, because it won't work for single-dish or caltables, and not all flag-methods require it. Instead, perhaps we could make only the data-handler be passed into runMethod(), but depending on the type of input data, there is access to different meta-data (for example, the data-handler for MS's will contain a const reference to the visbuffer.... ). Algorithms that don't need meta-data, just won't read any of it from the data-handler, and will be able to run on any type of input data.

  • Justo Gonzalez

    • Right now the functionality to read, write and access the data is spread across several classes.

      • Three of them are instantiated as members of the flagdata tool class (Flagger.cc):

        • Measurement Set: First layer that handles I/O operations between file system and CASA.

        • MS Selection: Second layer, based on Ta QL, that determines what particular data from the MS actually has to be used (and therefore has to be read/re-written)

        • Vis Set: Third layer that groups sets of visibilities (i.e. scans) and generates the corresponding iterators (Visibility Iterator) to go through them.

      • Another two are instantiated 'on the fly' in the Flagger::run() method:

        • Vis Buffer: Fourth layer that stores all the data for a given scan.

        • RF Chunk Stats: Interface layer between the Visibility Buffer and Visibility Iterator. It provides a centralized point for flagging agents to look up the data

      • Finally, the RF Data Mapper, which provides derived values from a set of complex visibilities, is instantiated from within the agents for each new chunk.

      • The following sub-diagram shows how these classes are encapsulated:

        Flagger.png
    • Therefore, one proposal to re-factor the data handling would be the following, which is different from what Light Flagger implements at the moment, but would be compatible with the Flag Data Handler concept:

      • The Flagger tool will no longer have a collection of members and dynamic objects from classes related to data handling (Measurement Set, Measurement Selection, RF Chunks, Visibility Buffer, …). Instead, a single object of the new class Flag Data Handler will encapsulate all the data handling functionality.

      • The new Flag Data Handler class will implement methods to loop through the data and access the current data cube in different ways, as in RF Data Mapper. Therefore, all the Data Mapper objects created within the Agents are no longer necessary, and will have to be replaced by calls to the data mapping functions within the Flag Data Handler class.

      • Additionally, the new Flag Data Handler class will handle the single dish and calibration table cases internally, so that they are transparent to the Agents, i.e. the same functions, with the same interface, used to read, iterate, access and flag MSs will be valid for these cases as well.

      • The Agents will no longer require a reference to an RF Chunks object; instead they will require only a reference to an object of the new Flag Data Handler class.

    • I have seen that Light Flagger implements an approach for data handling similar to the previous Flag Data versions (i.e. it holds an MS, a MS-Selected reference MS, a VisSet, etc.). What we are proposing is that this is now all moved to the new Flag Data Handler, and the same for the Flag Cube and Data Mappers, which are now specific to each Agent (i.e. one per Agent) and will become common entities shared among all the Agents. This will imply several changes in Light Flagger; is this ok for you?

  • Urvashi RV:

    • Yes it will, and this is ok. I will reconnect the Time Freq Crop, Extend Flag, and Display agents to your new interface once it is ready. I am using Light Flagger::read Data And Flags() only as a temporary solution, so that I can focus on those three agents while you and Sandra are building the full framework. As long as your data-handler allows agents to pull out and fill arrays of data and flags, all the agents I'm writing will remain happy.

Data selection

Requirement 2: Add an intelligent level of automatic data selection

  • Jeff Kern:

    • To be clear, I think we take the union of the “automatic” selection criteria and intersect that with the user-specified selection. For instance, if the user selection is Scan 7 and the automatic selection is only SPW 1, then we should flag SPW 1 of Scan 7. I think putting the hooks in for this should come first, and then we can start making the agents smarter.

  • Justo Gonzalez:

    • It is not clear to me if we want to take this to the read level or the iteration level. If the idea is to constrain what data is read, then we have a strong dependency on Ta QL, which is what the Measurement Selection class actually uses for reading the data. I have to check, in the example that you mention, whether the only data read by Ta QL is [SPW 1 of Scan 7] or, on the contrary, [all the SPWs from Scan 7 + SPW 1 from all the other Scans]. If Ta QL does not implement the first 'intersection' case, then what we can do is implement some logic within the new Flag Data Handler class, so that we only read Scan 7 and then create a sub-selection of SPW 1 based on the previous Scan 7 selection.

  • Urvashi RV:

    • I think TaQL can handle all the required intersections/unions - have you checked using the tb.query() function ? Also, perhaps I'm misunderstanding this, but why do we want the intersection and not the union of what all agents ask for ?

  • Justo Gonzalez:

    • Regarding the intersections, our example comes from Jeff, and we understood that the idea is to use an intersection instead of a union, but Jeff can confirm if this is correct.

  • Jeff Kern:

    • I think what we want is: U intersect (union(A1,A2,..., Ax)). As a concrete example: if the user selection is SPW 1, the first agent is going to flag Scan 1 and the second is going to flag Scan 6, then the data selection we want is SPW 1 of Scans 1 and 6.

  • Urvashi RV:

    • Why not eliminate 'U' altogether, and do only ' union(A1,A2,..., Ax) ' ?

  • Jeff Kern:

    • After talking this over with Urvashi, I think we agreed that what I said (U intersect (union(A1,A2,..., Ax))) is what we want. But all agents besides the manual one will want the entire data set, so the union becomes the entire data set and the intersection collapses back down to simply U. Perhaps I'm being unnecessarily general here, but I was trying to be flexible.
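This rule, U intersect (union(A1, ..., Ax)), can be sketched with plain sets of scan numbers standing in for full selections (a simplification for illustration; the real selections are TaQL expressions over several axes):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <set>
#include <vector>

// Sketch of the selection rule U intersect (union(A1, ..., Ax)).
// Scan numbers stand in for full TaQL selections (an assumption for brevity).
std::set<int> combineSelections(const std::set<int>& userSel,
                                const std::vector<std::set<int>>& agentSels)
{
    // Union of everything the agents ask for.
    std::set<int> agentUnion;
    for (const std::set<int>& sel : agentSels)
        agentUnion.insert(sel.begin(), sel.end());

    // Intersect the union with the user selection U.
    std::set<int> result;
    std::set_intersection(userSel.begin(), userSel.end(),
                          agentUnion.begin(), agentUnion.end(),
                          std::inserter(result, result.begin()));
    return result;
}
```

When some agent selects the entire data set, the union covers everything and the result collapses back to U.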

  • Sandra Castro:

    • The manualflag mode of flagdata2 already does the intersection. In the following example:

flagdata2('ngc5921.ms',selectdata=true,scan='3',manualflag=true,mf_spw='0:1',clip=true)

when selectdata=true, every mode will flag only scan=3. Since manualflag has its own selection parameters, it will manually flag only channel 1 of spw 0 in scan 3. The whole scan 3 will be flagged using RFI mode instead. I guess you want every other mode to do the same as manualflag already does in flagdata2, is this correct? In order to do that, we will need to include selection sub-parameters in every mode, as is now done for manualflag. Is this what you want?

Sorry, I meant rfi when I wrote clip. But a better example would be the following, in which case it will manually flag only channel 1 of spw 0 of scan 1, and rfi-flag scans 1, 2 and 3:

flagdata2('ngc5921.ms',selectdata=true,scan='1~3',manualflag=true,mf_scan='1', mf_spw='0:1',rfi=true)
  • Urvashi RV:

    • True - but it would still make 3 agents within one call to flagdata2. And - please correct me if I'm wrong - but I think our users find even this too complicated. They will call flagdata2 separately for every piece they want to flag.

  • Sandra Castro:

    • No, in this case it will make only one agent, because manualflag uses the "vector mode", which combines all the selections into one call to one agent. See the output below (which can be achieved by setting the variable dbg=true, inside Flagger.cc). See that only two agents are created, one for manualflag (with all spw selections combined) and one for quack mode.

      flagdata2('ngc5921.ms',selectdata=true,scan='1~3',manualflag=true,mf_spw='0:1~3;8~10;40~60',quack=true)

      2011-06-21 11:16:05    INFO    Flagger::setdata()     By selection 22653 rows are reduced to 12447
      2011-06-21 11:16:05    INFO    flagdata2::::casa    Flagging in manualflag mode
      adding agent 0 to record
      2011-06-21 11:35:20    INFO    flagdata2::::casa    Flagging in quack mode
      adding agent 1 to record
        0: {
          id: String "select"
          spwid: Int array with shape [1]
            [0]
          chan: Int array with shape [2, 3]
            [1, 3, 8, 10, 40, 60]
        }
        1: {
          id: String "select"
          scan: Int array with shape [3]
            [1, 2, 3]
        }

I don't know if the users have been complaining about the complexity of flagdata2. Remember that less than a year ago they requested flagdata2 to have this interface (CAS-2281, CAS-2249). A couple of months later, flagcmd was requested (to be based on flagdata, NOT flagdata2); this was described in CAS-2416. In our understanding, what the users want now is to be able to write flagcmd inputs with a simple data selection syntax, which internally (transparently to the user) will be optimised in the data selection layer. Is this correct? Here are our comments/questions.

  • Urvashi RV:

    • Yes. Thanks for clarifying what manualflag already does. Perhaps Jeff can clarify what the cause of slowness complaints was. It looks like we're already doing it in an optimized way.
  • Sandra Castro:

    • Are we going to keep the three tasks, flagdata, flagdata2 and flagcmd? Or do we want to merge the capabilities of flagcmd into flagdata2 or flagdata?
  • Urvashi RV:

    • Merge flagcmd into flagdata2.
  • Sandra Castro:

    • Please confirm if what we say next is actually what we want to have. We want flagdata2 to be able to read several inputs from flagcmd and make a union of all the selections specified in that call. This final union will be used to read the data only ONCE. This selection will then be given to the requested agents for processing. Is this the "automatic selection" mentioned by Jeff previously? An example would be (NOTE: editing the following example for correctness; Sandra, 21/7/2011):
flagcmd input:
--------------
scan='1~5' spw='0:1~5'
scan='1~5' mode='rfi'
scan='8~10' spw='0:40~60'
scan='8~10' mode='quack'
scan='2~3' mode='clip'

The flagcmd task will make the union of the above selections. In this case the union will be:

--> select scan='1,2,3,4,5,8,9,10', all spws (because clip=true wants all spws). 

This selected MS will be used by all agents. Each agent will loop through only what it needs to. For example:

- Manualflag agent will loop through scans 1,2,3,4,5 in the selected MS and flag channels 1~5 of spw=0. The second manualflag agent will loop through scans 8,9,10 and flag channels 40~60.

- The rfi agent will loop through scans 1~5 and all spws in the selected MS.

- The quack agent will loop through scans 8~10 and all spws in the selected MS.

- The clip agent will loop through scans 2,3 and all spws in the selected MS.

If this scenario is what you have been describing, we can propose the following.

- MS will be read only once (using the union of all user data selections).

- Each agent will independently loop through the selected MS, using only the portions of data it needs. This will be useful if we want to parallelize the work at the level of agents. If it happens that at least two agents are much slower than the others, the gain will be noticeable.
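The scheme above (read once with the union of all selections, then let each agent loop only over its own portion) can be sketched as follows; the class and member names are illustrative, not the real CASA ones:

```cpp
#include <cassert>
#include <set>
#include <vector>

// Sketch of the proposed scheme: the MS is read only once, using the union of
// all user selections, and each agent then loops only over the chunks it
// asked for. Chunk/Agent are illustrative stand-ins, not the real CASA classes.
struct Chunk { int scan; };

struct Agent {
    std::set<int> scans;   // this agent's own selection
    int processed = 0;     // stands in for the actual flagging work

    void run(const Chunk& chunk) {
        if (scans.count(chunk.scan))
            ++processed;   // the agent only touches chunks it selected
    }
};

// Single pass through the union-selected data, dispatching to every agent.
void runAll(const std::vector<Chunk>& unionChunks, std::vector<Agent>& agents) {
    for (const Chunk& chunk : unionChunks)
        for (Agent& agent : agents)
            agent.run(chunk);
}
```

With the flagcmd example above, the union covers scans 1~5 and 8~10, and the rfi, quack and clip agents would each visit only 5, 3 and 2 of those scans respectively.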

  • Urvashi RV:

    • Yes.

Data iteration

Columns sort order

Requirement 3: Ensure we only iterate through what we need to.
  • Justo Gonzalez:

    • We are proposing that each Flag Agent has its own (optimized) Visibility Iterator, meaning that the chunks can be generated in the order that is most convenient for the Agent's algorithm.

  • Urvashi RV:

    • Autoflag agents will usually need to operate in sequence, where the flags from one agent are used as the starting point for the next one. That's why the original suggestion was to share a flag cube (through a single data-handler that goes into all agents in succession).

  • Justo Gonzalez:

    • Your case can also be covered by adding a 'getVisBuffer' function in the base class:

agent1.flag()
agent1.flush()
visBuffer = agent1.getVisBuffer()
agent2.setVisBuffer(visBuffer)
agent2.flag()
...

However, do you think it is better to use independent iterators/visBuffers or a common one? We came up with this idea of independent iterators/buffers because maybe for some agents it is better to iterate over one kind of chunk (e.g. (scan,SPW)) and for others over a different kind (e.g. only time/scan). Maybe the gain for each agent is not so relevant, and there are some drawbacks (e.g. more memory usage). Let us know which approach you think is more convenient; we can easily change the design at this point.

  • Urvashi RV:

    • This means each agent has to write to the vb.flagCube. We need to avoid this if we operate in ' writeflags = False ' mode, where new flags are only calculated and counted-up, but not written to the visbuffer. The simplified mechanism implemented in lightflagger shares one copy of the flag cube (no merging-code required). However, this assumes that all agents will want the same data in the same shapes, etc...

  • Justo Gonzalez:

    • You are right, and in principle we can add a method to set the flag cube directly in the flag agents, so that you get the computed flag cube from one agent and pass it on to the next one without having to write the flags.

    • However, it looks like this approach of different iterators/chunks/visBuffers brings more disadvantages (memory, issues with consolidated flagging) than advantages (a flag() method optimized for each Agent). So if you prefer, we can implement the following:

      • Common iterator kept in the Flag Data Handler

      • Common visBuffer kept in the Flag Data Handler

      • prevChunk/nextChunk methods defined in the Flag Data Handler

      • Set of flag cubes (one per agent) kept in the Flag Data Handler

      • setFlags() method defined in the Flag Data Handler will set the 'accumulated' flag cube in the Measurement Set object.

      • The flag() method of the agents will set the flags in both, the 'accumulated' and 'specific' Flag Cube.

      • The flag() method of the agents will be implemented for a generic chunk.

      • The run(dh) method of the agents will simply retrieve the visBuffer from the dh and pass it on to the flag() method.

In this way you can have the consolidated flag cube (common to all the agents), and the specific flag cubes for each agent without requiring any merge-split code or writing the flags.
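The agreed scheme can be sketched roughly as below. The FlagDataHandler members and method names here are illustrative stand-ins for the design bullets above, and a flat boolean vector stands in for a real flag cube:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Rough sketch of the agreed scheme: the handler owns one 'accumulated' flag
// cube plus one cube per agent, and an agent's flag() call writes to both.
// A flat boolean vector stands in for a real flag cube; names are illustrative.
struct FlagDataHandler {
    std::vector<bool> accumulated;                      // consolidated flags
    std::map<std::string, std::vector<bool>> perAgent;  // one cube per agent

    explicit FlagDataHandler(std::size_t nSamples)
        : accumulated(nSamples, false) {}

    void registerAgent(const std::string& name) {
        perAgent[name].assign(accumulated.size(), false);
    }

    // Called from an agent's flag() method: set both cubes at once, so no
    // merge/split code is needed later.
    void setFlag(const std::string& agent, std::size_t sample) {
        accumulated[sample] = true;
        perAgent[agent][sample] = true;
    }
};
```

Because every flag() call updates both cubes at once, the consolidated view and the per-agent views stay consistent without any merge step before writing back to the Measurement Set.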

  • Urvashi RV:

    • Yes - I think this will be cleaner. Let's let Jeff also give the OK on this.

  • Jeff Kern:

    • Sorry I've been so quiet. I think the modified design as Justo and Urvashi agreed yesterday sounds very good, the single VisBuffer model will work much better with the Asynchronous IO. I agree that a simpler model is probably better, and since we are accessing in memory rather than on disk the scattered accesses should really not be that large of a performance hit.

  • Justo Gonzalez:

    • Right now Flagger is using chunks of (array,field,DDI,time), with no time intervals. This determines how the Agents iterate through each chunk (i.e. for a given chunk type you have different code). What kind of chunks would you like to have now? Do you want to have time intervals?

  • Urvashi RV:

    • A 'reasonable' time interval specified in the Vis-Iterator setup.

      • 'reasonable' == the default should relate to how much can comfortably sit in memory (nchan and nbaselines can be very different for diff datasets).

      • For autoflag algorithms, and the within-chunk flag-extender, the user can set the timeInterval. All agents will have to use the same timerange.

      • This means that agents that do not need multiple timesteps, have to become aware of "which rows are for which timesteps/scans".

      • The user should be able to choose whether or not to combine across scans (see the 'gaincal' task...)

The only advantage of the old way (iteration was done one timestep at a time) was uniform access patterns, and agents could accumulate any way they wanted - different time-ranges, or overlapping time-ranges for sliding-window filters. But, all agents then had to keep track of flags separately, no one used these options, and it was hard to debug.

  • Justo Gonzalez:

    • Now, regarding chunk characteristics, I suggest the following (inspired by gaincal):

      • By default the chunks will be broken down in (scan,field,spw)

      • The user can specify a 'combination' parameter to use something different (e.g. spw only)

      • By default the time interval will be calculated to avoid memory overload

      • The user can specify the time interval as well

  • Urvashi RV:

    • Yes - I think this would work just fine.

    • The 'combination' option is nice to have too. Suppose the user asks for timerange='60min', spw='3~5' for each chunk; the scan length is 10 minutes (there are 6 scans in 60 min), and each spw has 50 channels (there are 3 spws):

      • If combine = 'none' then, the flagger operates in units of 10-minute scans, with 50 channels each, and makes 6x3=18 chunks instead of one.

      • If combine = 'scan' then, the flagger makes 3 chunks (combines all 6 scans, but keeps spws separate)

      • If combine = 'scan,spw', then the flagger makes only one chunk by combining all 6 scans and 3 spws.

    • Also note, multithreading across baselines is a useful thing to keep in mind....
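The chunk counts in the 'combine' example above follow directly from which axes are merged; a minimal sketch (the function name and the comma-separated 'combine' syntax are assumptions):

```cpp
#include <cassert>
#include <string>

// Sketch of how the 'combine' parameter changes the number of chunks in the
// example above (6 scans inside the timerange, 3 selected spws). The function
// name and the comma-separated 'combine' syntax are assumptions.
int chunkCount(int nScans, int nSpws, const std::string& combine) {
    bool combineScan = combine.find("scan") != std::string::npos;
    bool combineSpw  = combine.find("spw")  != std::string::npos;
    int chunks = 1;
    if (!combineScan) chunks *= nScans;  // one chunk per scan
    if (!combineSpw)  chunks *= nSpws;   // times one chunk per spw
    return chunks;
}
```

With 6 scans and 3 spws this reproduces the three cases above: 18 chunks for combine='none', 3 for combine='scan', and 1 for combine='scan,spw'.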

  • Justo Gonzalez:

    • Going in this direction, I think it would be optimal if each thread operates over a group of baselines within the chunk, applying the modes/agents internally in sequential mode within the sub-chunk. For this we would need:

      • Another class that groups the agents and has spawn(), join() and run() methods

      • start/stop indexes in the base class of the agents to define the sub-chunk boundaries
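A minimal sketch of such a spawn()/join() class, with start/stop indexes defining each thread's sub-chunk of baselines (names are hypothetical, and the per-baseline agent work is stood in by a counter):

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical sketch of the proposed class: spawn one thread per group of
// baselines, each thread applying the agents sequentially inside its
// [start, stop) sub-chunk. A counter stands in for the real agent work.
struct AgentGroup {
    std::atomic<int> flaggedBaselines{0};

    // What each thread runs: the agents, in sequence, over one sub-chunk.
    void run(std::size_t start, std::size_t stop) {
        for (std::size_t b = start; b < stop; ++b)
            ++flaggedBaselines;   // each agent would flag baseline b here
    }

    // spawn() + join(): split nBaselines into contiguous groups of roughly
    // equal size, one per thread.
    void spawnAndJoin(std::size_t nBaselines, std::size_t nThreads) {
        std::vector<std::thread> pool;
        std::size_t step = (nBaselines + nThreads - 1) / nThreads;
        for (std::size_t t = 0; t < nThreads; ++t) {
            std::size_t start = t * step;
            std::size_t stop  = std::min(nBaselines, start + step);
            if (start < stop)
                pool.emplace_back(&AgentGroup::run, this, start, stop);
        }
        for (std::thread& th : pool) th.join();
    }
};
```

Each thread owns a disjoint baseline range, so the agents need no locking while working within their own sub-chunk.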

  • Urvashi RV:

    • Yes ! Sounds perfect. This is also how some recent multi-threading has been implemented in the Gridders. The only thing is - there can be a max of 4 polarizations. So, multithreading will be limited to 4 cores only.

    • However, let us wait before designing too much of the multithreading support. Jeff wants to try the MS-partitioning code he has been writing. At the python-level, it will split the MS into multiple reference MSs, and simply run multiple instances of the tool or task on the different pieces. This will end up using multiple cores for different field, spw and scan chunks. Jeff is checking in code to enable this for any task/tool, and wants to see how it works on the flagger.

  • Justo Gonzalez:

    • Regarding multi-threading: provided that sometimes you need to combine various correlation products, would it be better to parallelize per group of channels?

    • Regarding Jeff's MS-partitioning:

      • Is it going to be used for the pipeline clusters (i.e. split the MS and send the pieces to different machines), or also within each machine?

      • What is going to be the partitioning approach in general (SPWs,scans) ?

  • Urvashi RV:

    • Regarding multi-threading: This will depend on the agent. Clipping can partition on any axis. TFCrop needs time/freq planes separately for all corrs, or together (can partition on baseline, may want to combine across spws). Any RFI excision method will need all times for all baselines together (can partition on frequency). So, they're all different.

    • Regarding Jeff's MS-partitioning: Yes. It creates a set of reference MSs, and there is a tool that splits this across machines and also within each machine. The assumption is that the data are visible to all machines.

  • Justo Gonzalez:

    • Regarding multi-threading: Ok, then it looks like we cannot have a generic data partition approach for all the agents. Should we reconsider the first proposal (parallelize per agent)? For this we would only have to implement a mutex in the access to the 'common' flag cube, resident in the data handler. And it would still be compatible with your request to have the separate flag cubes, because each agent will write the flags to its own flag cube in addition to the common one.

    • Regarding Jeff's MS-partitioning: Ok, then if this is meant for the cluster then we can add on top of it parallelization at the task level, as we are discussing. This parallelization will allow us to use more cores in each of the machines in the cluster. In the end we would have a (cluster parallelization)x(task parallelization) factor.
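The per-agent parallelization variant described above (a mutex only on the 'common' flag cube, lock-free writes to each agent's own cube) could look roughly like this; all names are illustrative and flat vectors stand in for cubes:

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

// Sketch of the per-agent parallelization variant: each agent writes freely
// to its own flag cube, and only the shared 'common' cube in the data handler
// is guarded by a mutex. Names are illustrative.
struct SharedFlags {
    std::vector<bool> common;
    std::mutex commonMutex;
    explicit SharedFlags(std::size_t n) : common(n, false) {}
};

// One agent's pass, run on its own thread.
void agentPass(SharedFlags& shared, std::vector<bool>& ownCube,
               const std::vector<std::size_t>& toFlag) {
    for (std::size_t idx : toFlag) {
        ownCube[idx] = true;   // private cube: no lock needed
        std::lock_guard<std::mutex> lock(shared.commonMutex);
        shared.common[idx] = true;   // shared cube: mutex-protected
    }
}
```

Only the writes to the common cube serialize; since each agent owns its private cube outright, contention stays limited to the consolidated view.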

  • Urvashi RV:

    • Jeff's partitioning is meant both for across nodes in a cluster, and also within each node. For a cluster with 4 nodes and 2 cores per node, he will split the MS into 8 pieces and send two per node. So, if this is done properly, there will be no free cores for multi-threading.

    • But, your planned design will support adding multi-threading later if we need it (perhaps on a per-agent basis, although then you have load-balance). And yes, if we do this, then it's only the 'common' flagcube that needs a mutex. But again... this is for later, because Jeff's goal on the partitioning is to use up all cores per node.

Asynchronous I/O

Requirement 4: Switch to Jim Jacobs' Async I/O for the iteration through the iterator
  • Jeff Kern:

    • There is a description here: https://safe.nrao.edu/wiki/bin/view/Software/AsyncIOon110111 (off of CASA index). This is a framework that Jim Jacobs is working on that essentially prefetches visBuffers when we are iterating through them. Probably it would make the most sense for this task to be addressed when Justo is here in Socorro?

  • Justo Gonzalez:

    • In order to reliably profile the Flag Data Handler with async I/O switched on/off, I would need the list of columns that the various flagging algorithms use, in order to force the Visibility Buffer to fetch them in sync mode and thus compare with the async case.

Visibility Buffer columns, to be checked against each agent (Shadow, Elevation, Quack, RFI, Manualflag, Clip, Summary, Autoflag):
antenna1
antenna2
arrayId
channel
CJones
corrType
direction1
direction2
exposure
feed1
feed1_pa
feed2
feed2_pa
fieldId
flag
flagCategory
flagCube
flagRow
floatDataCube
frequency
nChannel
nCorr
nRow
observationId
phaseCenter
polFrame
processorId
scan
sigma
sigmaMat
spectralWindow
stateId
time
timeCentroid
timeInterval
uvw
uvwMat
visibility
visCube
weight
weightMat
weightSpectrum

Data mapping

Requirement 5: Introduce in the new Flag Data Handler the functionality that used to be in the Data Mapper (but with better syntax)

  • Jeff Kern:

    • Introduce in the new Flag Data Handler the functionality that used to be in the Data Mapper (but with better syntax). Thus the Data Handler class will be responsible for pulling the data out of the Vis Buffer and have methods that allow other views of this data to be returned (i.e. stokes I, abs, phase etc).

  • Urvashi RV:

    • Autoflag wants 2D time/freq planes per baseline and correlation (for one field). If you select 10 scans, and 2 spws, will dh.getView() be able to return time/freq 2D matrices with scans combined in one dimension, and spws in the other ? This is useful if you have very broad-band RFI at the edge of a spw, and you need info from the adjacent spw to be able to correctly find this RFI.

  • Justo Gonzalez:

    • And regarding the dh.getView() mode that you were describing, I think that what you really meant is to extract all the rows that correspond to a given baseline, defined by a pair (Ant_1,Ant_2). That defines a data cube (time/scan,channel,polarization) for a given baseline, and to produce a 2D matrix we then combine the 4 polarizations to return any single parameter (e.g. Abs,Real,S0,S1,...). The interface for that function would then be:

Matrix<double> * getView(uInt Antenna1,uInt Antenna2, uInt parameter)

Where parameter can be an enumeration

FlagDataHandler::ABS,REAL,IMG,S1,S2,...
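A minimal sketch of this getView() idea for a single visibility sample (the enumeration values, the RR/RL/LR/LL ordering, and the S0 = (RR+LL)/2 definition are assumptions for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Minimal sketch of the proposed getView() for one visibility sample with its
// 4 correlation products. The enumeration values, the RR/RL/LR/LL ordering
// and the S0 = (RR+LL)/2 definition are assumptions for illustration.
enum ViewParameter { ABS, REAL, IMAG, S0 };

double getView(const std::vector<std::complex<double>>& corr, ViewParameter p) {
    switch (p) {
        case ABS:  return std::abs(corr[0]);                  // |RR|
        case REAL: return corr[0].real();                     // Re(RR)
        case IMAG: return corr[0].imag();                     // Im(RR)
        case S0:   return std::abs(corr[0] + corr[3]) / 2.0;  // |RR+LL|/2
    }
    return 0.0;
}
```

The real method would apply the chosen parameter over a whole (time, channel) matrix per baseline, but the per-sample mapping is the same.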
  • Urvashi RV:

    • We will not always want to combine all 4 polarizations. Getting out a cube (ntime,nchan,ncorr) is sometimes better, and the datatype can be float or complex. (For efficiency we should choose the axis order that makes the time and chan elements contiguous.) Most loops will be of the type:

for(polarization){
   for(channel){ 
      for(timestep){
         ...(do)...
      } 
    } 
}

or

for(polarization){
   for(timestep){ 
      for(channel){
         ...(do)...
      } 
    } 
}

Keeping polarization separate will eliminate one layer of flag-translation. In what you've described, the flag-handler will have to do things like: if the agent is flagging on Stokes I, then translate flags to the RR and LL correlations. The old code did this, and there were 3 problems:

(i) It had expressions the user could set (in reverse polish notation), but this was not generic-enough for arbitrary expressions that some newer algorithms need.

(ii) It did not allow access to the complex visibilities.

(iii) If flagging was done on Stokes I, agent flags would have to be translated by the flag-handler into RR and LL. This extra layer of translation was not helpful while debugging, and agents were not allowed to flag what they wanted (for instance, detect RFI only on RR, but apply flags to both RR and LL).

I think it would be cleaner to let each agent deal with these expressions separately. dh.getView() should just return a Cube of the chosen datatype (Float (abs,phase,real,imag) or Complex) with time/freq/corr on the three axes. This way the flags will be of the same shape, and the flag-handler will not have to do a flag-translation (reducing the number of steps). I don't think this will spread complexity to too many agents, because only Clip and Autoflag require the flexible expressions. Each autoflag algorithm will require a different thing, so it's simplest to let the agents handle it internally.

  • Justo Gonzalez:

    • Regarding dh.getView(), what would you think of using a math expression parser? There are a number of MIT/LGPL C++ packages that support complex numbers and vectors, for instance muParserX. In this way dh.getView() would be a wrapper for the math parser, in charge of iterating through baseline, freq and time applying the required expression. And for us it is ok to produce either Cubes (time,channel,baseline) for the whole chunk or a Matrix (time,channel) for a single baseline; it does not add much complexity to the algorithm. So what about the following signatures:

Cube<Float> * getView(mathExpression)
Matrix<Float> * getView(mathExpression,antenna1,antenna2)
Cube<Complex> * getView(mathExpression)
Matrix<Complex> * getView(mathExpression,antenna1,antenna2)
  • Urvashi RV:

    • This looks like a nice option, but how will you handle flag-translation for an arbitrary expression ?

    • What are the axes of the Cubes and Matrices? Some use cases I can think of (there are 4 dimensions within each chunk: time, frequency, correlation, and baseline):

(1) Raw complex visibilities as NTime x NChan x NPol

Cube<Complex> * getView(antenna1, antenna2)

(2) Unary operations : abs, phase, real, imag. Therefore, same shape as raw visibilities. NTime x NChan x NPol

Cube<Float> * getView(mathExpression, antenna1, antenna2)

(3) Matrix of dimensions Ntime x Nbaseline, with some way to indicate missing baselines for some timesteps. (Note that in the MS, in the visBuffer, NRow = Ntime x Nbaseline, and the number of baselines can be different per timestep.)

Matrix<Complex> *getView(chan, corr)

(4) Expressions that collapse 2 or more polarizations. For example, RR+LL, RR-LL, RL+iLR. Shape : NTime x NChan.

Matrix<Complex> * getView(mathExpression, antenna1, antenna2)

(5) Same as (3) but with unary ops too. Eg. ABS(RR+LL)

Matrix<Float> * getView(mathExpression, antenna1, antenna2)

For (1) and (2), the flags set by the agent are the same shape as in the data. All is good.

For (4) and (5), how do you plan to translate the single Matrix of flags into multiple correlations? You could use the rule of "flag all planes that were touched in the expression". But that still does not allow you to look for flags on RR only, yet apply flags to both RR and LL...

The only use case I can think of for (4) and (5) is Clipping, and the behaviour of "flag all planes touched in the expression" is a very good default.

Bottom line : I do not see an advantage in having the nice parser for anything other than Clipping. This is mainly because of the complexity it introduces in handling the flags, which seems unnecessary since agents can generate (3), (4) and (5) from (1) and (2).

Can you see a clean way to deal with flags ? This is what the old code attempted to do, and it is one of the reasons for trying to make things simpler now. It wasn't used, and it wasn't easy to debug when given a dataset with all kinds of varying shapes, with missing data, or with data values exactly equal to zero, which should sometimes be treated as 'flagged' and sometimes not.

See the function LightFlagger::readVisAndFlags(), and how I deal with uneven nbaselines there.

I would prefer having only (1) and (2). If it's easy to put in, the mathExpressions could be useful for some kinds of flagging, but...

  • Justo Gonzalez:

    • We can start implementing (1) and (2) right away. For (4) and (5) we would actually need to use the math parser, although another alternative would be the following:

      • Define +,-,*,i* operators for the Matrix data type

      • Define ABS,PHASE,IMAG,REAL functions for the Matrix data types in a ComplexMatrixMath static class.

      • Define LL,RL,LR,RR,XX,YY, ... functions in the dh that return the various correlation products

    • Then you could do the following: ComplexMatrixMath.ABS(dh->RL + i*dh->LR)

  • Urvashi RV:

    • This is a very nice option to have, but is not necessary for the first implementation. I'd suggest we use (1),(2) first, so that we all understand how the flags are to be handled when you try to write an expression that combines correlation axes. Once that is sorted out, we can add (4),(5). Is that ok ? I think in your design it's easy to add this later.

  • Justo Gonzalez:

    • Ok, then we can close dh.getView() for this first iteration: we will implement stand-alone versions of (1) and (2).
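As a concrete reference for what "stand-alone versions of (1) and (2)" could look like, here is a minimal C++ sketch. All names are hypothetical and plain std:: containers stand in for casacore's Cube; it only illustrates the agreed shapes and the unary-operation view, not the final API.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Hypothetical stand-in for casacore's Cube<T>: a flat buffer with
// (time, chan, pol) indexing.
template <typename T>
struct Cube {
    size_t nTime, nChan, nPol;
    std::vector<T> data;
    Cube(size_t t, size_t c, size_t p) : nTime(t), nChan(c), nPol(p), data(t * c * p) {}
    T& at(size_t t, size_t c, size_t p) { return data[(t * nChan + c) * nPol + p]; }
};

// View (1): raw complex visibilities for one baseline, NTime x NChan x NPol
// (here just a copy of what the data handler would slice out of the buffer).
Cube<std::complex<float>> getRawView(const Cube<std::complex<float>>& vis) {
    return vis;
}

// View (2): unary operation (abs/phase/real/imag) applied element-wise,
// producing a float cube of the same NTime x NChan x NPol shape.
Cube<float> getUnaryView(const Cube<std::complex<float>>& vis,
                         float (*op)(const std::complex<float>&)) {
    Cube<float> out(vis.nTime, vis.nChan, vis.nPol);
    for (size_t i = 0; i < vis.data.size(); ++i)
        out.data[i] = op(vis.data[i]);
    return out;
}

float absOp(const std::complex<float>& v) { return std::abs(v); }
```

Because the output shape of (2) equals the input shape, flag translation stays trivial, which is exactly the argument made above for deferring (4) and (5).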

Flags structure

Requirement 6: Modify the flag structure from a boolean array to a bit array so each agent will have its own bit

  • Jeff Kern:

    • Modify the flagStructure from a boolean array to a bit array so each agent will have its own bit. Thus we can tell, when we're done, which agent flagged which data.

    • There are two other classes, the Flag Display and the Flag Examiner, which should be refactored/cleaned up and improved to make use of the bit array above.

  • Urvashi RV:

    • The array of flags read in Light Flagger::readVisAndFlags() should be of type Int, so that we can store flags from each agent in one bit (whose position is determined by the order of agents in the list).

    • Keeping the flag array as a separate entity allows it to be shared across agents, with each agent filling in its assigned "bit" position.

  • Justo Gonzalez:

    • Right now each agent internally stores its flags in what is called a 'flag lattice': an object member of RF Cube Lattice type, which is essentially a cube of unsigned integers. What we can do is the following:

      • Instead of using one Flag Lattice per agent, we will have one common Flag Lattice for all the agents that will be a member of the new Flag Data Handler class, which will implement methods to store the flags in the corresponding bit position for each agent.

      • Instead of using an RFlagWord (unsigned integer) we will use an array of booleans. This is different from what has been proposed (using different bits of an unsigned integer variable), but it has the big advantage of allowing different agents to run in parallel without running into race conditions.

      • Flag Data Handler will implement a method to synchronize the contents of the common Flag Lattice with the visibility buffer; it will first merge all the flags into a single one by applying an OR over the separate flags contained in the array of booleans.

  • Urvashi RV:

    • A boolean array for the flags makes sense (it will help for parallelization, and we will not be restricted to only 8 agents...).

    • As of now, the Display-Agent I have is consistent with your proposed design. However, FYI, I'm trying two features, and will at a later date ask you about the best way to implement them: (a) the display agent queries the other agents for algorithm-specific information to display on the GUI; (b) the display agent's runMethod() returns information that indicates whether parameters have changed and whether to re-run all agents' runMethod() on the current data chunk. This requires some way of either re-reading 'original' flags in the data handler, or having the data handler store "original" and "modified" versions of the flags, or having the display agent record the original flags before all other agents operate and then read the new flags once they're done. I prefer the last option, because this is useful only for display and statistics, and it may be best to keep this complexity internal to the display agent.

  • Justo Gonzalez:

    • Regarding the Display Agent questions, I think that the approach you suggest is the best one: the Flag Data Handler will hold two Flag Cubes, the original and the modified one. Additionally, we can implement methods to compare these two Flag Cubes and thus retrieve the required statistics.

  • Urvashi RV:

    • Sounds good ! The current version of the Display agent does statistics counts for original and new flags too.... we can decide later if this should remain there, or move to a separate statistics agent...

  • Justo Gonzalez:

    • We have rethought this and come up with the following idea: provided that each agent will have its own iterator (to fulfill Requirement 2), they can also have their own Flag Cubes (original and modified) of single boolean values. This will allow the Flag Displayer/Examiner classes to access the flags of each Flag Agent independently, and will also allow parallelization.

  • Urvashi RV:

    • Autoflag agents will usually need to operate in sequence, where the flags from one agent are used as the starting point for the next one. That's why the original suggestion was to share a flag cube (through a single data-handler that goes into all agents in succession).

  • Justo Gonzalez:

    • Your case can also be covered by adding a 'getVisBuffer' function to the base class:

agent1.flag()
agent1.flush()
visBuffer = agent1.getVisBuffer()
agent2.setVisBuffer(visBuffer)
agent2.flag()
...
  • Urvashi RV:

    • This means each agent has to write to the vb.flagCube. We need to avoid this if we operate in 'writeflags = False' mode, where new flags are only calculated and counted up, but not written to the vis Buffer. The simplified mechanism implemented in LightFlagger shares one copy of the flag cube (no merging code required). However, this assumes that all agents will want the same data in the same shapes, etc.

  • Justo Gonzalez:

    • You are right, and in principle we can add a method to set the flag cube directly in the flag agents, so that you get the computed flag cube from one agent and pass it on to the next one without having to write the flags. But if you prefer, we can implement the following:

      • Set of flag cubes (one per agent) kept in the FlagDataHandler

      • Common 'accumulated' flag cube kept in the FlagDataHandler

      • The flag() method of the agents will set the flags in both, the 'accumulated' and 'specific' Flag Cube.

In this way you can have the consolidated flag cube (common to all the agents), and the specific flag cubes for each agent without requiring any merge-split code or writing the flags.

  • Urvashi RV:

    • Yes - I think this will be cleaner.
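The scheme agreed above (one 'specific' boolean flag cube per agent plus one shared 'accumulated' cube, kept in sync at flag() time so no merge/split pass is needed) can be sketched as follows. Names and layout are our illustration, not the final API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Shared flag storage: a flattened boolean cube (pol, chan, row).
struct FlagCube {
    std::vector<bool> flags;
    explicit FlagCube(std::size_t n) : flags(n, false) {}
};

// Per-agent view: the agent writes into its own 'specific' cube AND into
// the common 'accumulated' cube, so the consolidated flags are always an
// OR of the per-agent flags by construction.
struct AgentFlags {
    FlagCube specific;      // flags created only by this agent
    FlagCube& accumulated;  // shared across all agents (held by the handler)
    AgentFlags(std::size_t n, FlagCube& acc) : specific(n), accumulated(acc) {}
    void setFlag(std::size_t i) {
        specific.flags[i] = true;
        accumulated.flags[i] = true;
    }
};
```

Because each agent only ever sets bits in its own specific cube (and monotonically ORs into the accumulated one), the Flag Displayer/Examiner can inspect per-agent flags without any merging code.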

Base class

Requirement 7: The agents should all be derived from a common base class with a run method generic for all kinds of data (Ms,CalTables,SingleDish)

  • Jeff Kern:

    • The agents should all be derived from a common base class, this class should have the following capabilities:

      • Virtual runMethod should have signature run(const Vis Buffer*, const Data Handler*, flag Structure)

      • Should have methods for storing parameters and writing flag_cmd commands when modified interactively (Urvashi to clarify design)

  • Urvashi RV:

    • LF Base: Base class for the agents. Contains only parameter controls, and a runMethod() function. All agents get references to the data and flag arrays from Light Flagger, and modify the flags according to the algorithm. The data and flag arrays are 3D with pol, chan, time*baseline as the 3 axes.

    • Flagcmd and parameter-memory : Some way for the interactive choices to be recorded so that it can be re-run. The usage mode of an autoflag algorithm is to first run it interactively (in do-not-write-flags-to-MS mode), fiddle with parameters until you're happy, keep recording these parameters (currently Records), and then re-run the program in non-display mode, but with all these flag-commands built in.

    • To supply meta-data for MS's, we had suggested sending in a " const Vis Buffer " to the flagmethod.runMethod(), but this is too restrictive, because it won't work for single-dish or caltables, and not all flag-methods require it. Instead, perhaps we could make only the data-handler be passed into runMethod(), but depending on the type of input data, there is access to different meta-data (for example, the data-handler for MS's will contain a const reference to the visbuffer.... ). Algorithms that don't need meta-data, just won't read any of it from the data-handler, and will be able to run on any type of input data.

    • There is one more feature that could be useful in the future : the ability to call any flagging agent on a visiter::chunk (or visbuffer) - from something other than the flagger. We do not need this right now, but it should be kept in mind (again, not designed-out).

    • For example, the use of the flag-display is mainly to try different parameters, see what the autoflag algorithm does (without writing flags to the MS). The displays are however restricted to only what I program into it. But, if there were a way for say "plotms" to call a flagger function per visIter "next()", and then proceed with plotting whatever the user chooses to see, that could be useful."plotms" could even get a new 'tab' for flagging controls, and could set up autoflag/shadow parameters there. Does this look feasible to you in the planned design ? Could a version of the data-handler be constructed from a visbuffer (or visiter-chunk), and flagmethod[i].runMethod(dh) be called on it from anywhere ?

  • Justo Gonzalez:

    • It looks like the approach of different iterators/chunks/visBuffers brings more disadvantages (memory, issues with consolidated flagging) than advantages (a flag() method optimized for each Agent). So if you prefer we can implement the following:

      • Common iterator kept in the Flag Data Handler

      • Common visBuffer kept in the Flag Data Handler

      • prevChunk/nextChunk methods defined in the Flag Data Handler

      • Set of flag cubes (one per agent) kept in the Flag Data Handler

      • setFlags() method defined in the Flag Data Handler will set the 'accumulated' flag cube in the Measurement Set object.

      • The flag() method of the agents will set the flags in both, the 'accumulated' and 'specific' Flag Cube.

      • The flag() method of the agents will be implemented for a generic chunk.

      • The run(dh) method of the agents will simply retrieve the visBuffer from the dh and pass it on to the flag() method.

In this way you can have the consolidated flag cube (common to all the agents), and the specific flag cubes for each agent without requiring any merge-split code or writing the flags.

  • Urvashi RV:

    • Yes - I think this will be cleaner.
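The run(dh)/flag() split described above can be sketched as follows (hypothetical names, with the data handler reduced to an empty placeholder): the non-virtual run(dh) retrieves whatever it needs from the data handler and delegates to the algorithm-specific virtual flag(), so agents that need no MS meta-data can run on any input type.

```cpp
#include <cassert>

// Placeholder for the class that would hold the common iterator,
// visBuffer and flag cubes.
struct FlagDataHandler {};

class FlagAgentBase {
public:
    virtual ~FlagAgentBase() {}
    // Non-virtual driver: generic for MS, caltable and single-dish data.
    void run(FlagDataHandler* dh) {
        (void)dh;   // agents read meta-data from dh only if they need it
        flag();     // per-chunk, algorithm-specific work
    }
protected:
    virtual void flag() = 0;
};

// Example agent: just records that it was driven through run().
class CountingAgent : public FlagAgentBase {
public:
    int chunksFlagged = 0;
protected:
    void flag() override { ++chunksFlagged; }
};
```

This also keeps the door open for the plotms-style use case mentioned above: anything that can construct a data handler can call run(dh) on any agent.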

Requirement 8: Port Quack, Manual, Clipping, and Shadow agents to the above framework

  • Jeff Kern:

    • Port Quack, Manual, Clipping, and Shadow agents to the above framework

  • Sandra Castro:

    • The current flag data design already implements a class diagram for the Flag Agents whose base class is RFA Flag Cube Base. However, some of the flag modes have dedicated agents (rfi is implemented exclusively by RFA Time Freq Crop), whereas others share one common agent (manualflag, shadow, elevation, quack and clip share RFA Selector):

      ClassesDiagram.png
    • Therefore we understand that the idea here is to have dedicated Agents for the manualflag, shadow, elevation, quack and clip modes as well, derived from LF Base instead of RFA Flag Cube Base, and with an implementation ported to use the above mentioned Flag Data Handler

Flags extension

Requirement 9: Flag Extension capabilities (in Time, Polarization, Freq ...)

  • Jeff Kern:

    • The common base class should have flag extension capabilities in Time, Polarization, Freq, (whatever else we already have)

    • Extend the above with a fairly trivial "Extension" agent which just extends existing flags.

  • Urvashi RV:

    • Flag Extension in LF Base : This is to allow each agent to extend its own flags independently of the rest. This makes sense only after the flag structures have been migrated to an array of booleans. Note that this could also be done in a stand-alone Flag Extender (similar to LF Extend Flags), because we can distinguish between flags for different agents and extend them separately.

  • Justo Gonzalez:

    • Regarding Jeff's comment we understand the following:

      • Extension in time: Iterate in chunks over time; if there is at least one row flagged in the chunk, then all the other rows in that chunk have to be flagged as well.
      • Extension in polarization: Iterate in chunks over polarization; if there is at least one row flagged in the chunk, then all the other rows in that chunk have to be flagged as well.
      • … and so on for Observation, Array, Scan, Field, SPW and Channel.
    • However, regarding Urvashi's comment, we have to take into account that even though the flags are going to be handled separately by each Flag Agent, they will still be single boolean values in the MS; therefore we cannot extend the flags for a particular Flag Agent/mode (assuming that flag extension is something done independently from the flagging itself)
  • Urvashi RV:

    • Yes. But if there's only one iterator in the data handler, and one of the autoflag agents wants to accumulate 100 timesteps in one chunk, then the flag extender should know which rows belong to which timestep:

    • Just to make sure... by 'extend in time', do you mean "If one baseline/chan/pol is flagged for this timestep, flag all baselines/chans/pols in this timestep" ? I have been defining "flag extension in time" differently: for a channel, if 50% of the timerange in the chunk is flagged, flag the whole timerange in that chunk, for that channel. We'll need to get the nomenclature sorted out :).

    • FYI : http://www.aoc.nrao.edu/~rurvashi/TFCrop/TFCrop.html This is what I've released for early testing (of the autoflag algorithms and display system). It should give you a clearer picture of what my comments are biased by. Also, a few weeks ago I had sent a link to a test dataset with RFI in it. This new task will run on it.... if you want to play with it.

  • Justo Gonzalez:

    • Maybe the idea here is to migrate LFExtendFlags to the new framework (as with the other agents); is this correct?

  • Urvashi RV:

    • Yes ! I will port LF Time Freq Crop, LF Display Flags and LF Extend Flags to the new framework once it's ready. LF Extend Flags already has extensions in time, freq, and correlation. I will be adding extensions across baselines. Jeff and I talked just now. Flag extensions are of two kinds:

      • (1) 1D extensions within a chunk -- time (within a timerange), frequency (within one spw), correlation, baseline. This will be covered by LFExtendFlags. Extensions along multiple dimensions will depend on the order in which the user specifies the extensions.....

      • (2) Extensions across chunks -- field, array, data-desc-id, time (all times...). This has to be done in two passes. In the first pass, the ExtendFlag agent can count all existing flags and, at the end, generate flagcmds for cross-chunk extensions. A second pass with manualflag will apply these extensions. For example : "For each SPW, if 80% of a channel is flagged for the calibrator (RR,LL correlations), flag all correlations (RR,RL,LR,LL) of that channel for the target source."

  • Justo Gonzalez:

    • Regarding cross-chunk flag extension:

      • Implement a flag counter that computes flagging percentages for all the dimensions (e.g. for scan 1, 35% of the rows are flagged; and for SPW 10, which is present in scans 1, 2 and 3, 60% of the rows are flagged).

      • With this information we generate flag commands if some thresholds are reached, to flag the rest of the rows (e.g. 60% of the rows with SPW 10 are flagged, so we flag all the rows with SPW 10, some of which are in scan 1, others in scan 2 and others in scan 3).

  • Urvashi RV:

    • About accumulating flag counts, would something like RFA Flag Examiner, or LF Display Flags::Accumulate Stats(VisBuffer &vb) suffice, or were you thinking of something more generic ?

  • Justo Gonzalez:

    • Yes I think that the most useful cases (accumulate-extend in channel and correlation) are already implemented in RFA Flag Examiner, so we can port it to the new framework and combine it with manualflag to extend the flags across chunks.

  • Urvashi RV:

    • Yes - it is also in LF Extend Flags (in the simplified way where the agent has direct access to flags for all correlations).

  • Justo Gonzalez:

    • If LF Extend Flags covers within-chunk flag extension, and can be generalized to cover cross-chunk flag extension as in RFA Flag Examiner, then apart from porting, is there any new functionality to introduce?

  • Urvashi RV:

    • The ported LF Extend Flags will handle within-chunk extensions, and will count flags across chunks. At the end of the MS, it will generate a list of flagcmds for cross-chunk extensions, that can be run as manualflags in a second pass through the data. (Note : LF Extend Flags does not extend across baselines yet.... I will add this soon).

    • => New functionality --> Move counters into LF Extend Flags, make sure that counts are done on all required axes, and create flag-cmds using these final counts.

    • Note also that these counts are useful for the display agent as well..... and should be re-usable. So, perhaps we have a LF Examine Flag agent, from which LF Display Flag, and LF Extend Flag can inherit....
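The two-pass cross-chunk extension discussed above (pass one accumulates flagged/total counts per axis value; pass two applies manualflag commands for the values whose flagged fraction reaches a threshold) could look roughly like this. FlagCounter, its key scheme, and the command format are our illustrative inventions.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Pass-one accumulator: flagged/total row counts keyed by an axis value
// (here an SPW id as a string; scans, fields, etc. would work the same way).
struct FlagCounter {
    std::map<std::string, std::pair<long, long>> counts; // key -> {flagged, total}

    // Called once per chunk while iterating through the MS.
    void add(const std::string& spw, long flagged, long total) {
        counts[spw].first += flagged;
        counts[spw].second += total;
    }

    // At end-of-MS: emit a manualflag-style command for each key whose
    // flagged fraction reaches the threshold (e.g. 0.8 as in the example).
    std::vector<std::string> flagCommands(double threshold) const {
        std::vector<std::string> cmds;
        for (const auto& kv : counts) {
            double frac = double(kv.second.first) / double(kv.second.second);
            if (frac >= threshold)
                cmds.push_back("spw='" + kv.first + "'");
        }
        return cmds;
    }
};
```

A second pass with manualflag would then simply apply the returned command list, which matches the "generate flagcmds, re-run as manualflags" workflow above.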

New design proposal

Final requirements

Data handling:
  • Encapsulate all the I/O functionality in one single class that reads, selects and writes the data only once, removing the MS, Vis Iter, and Vis Buffer related objects from the Flagger tool.

  • This class will be generic for MSs, Single Dish and Calibration Tables.

Data selection:
  • Merge flagcmd into flagdata2

  • Ensure that all the data selection ranges coming from the different flag cmds are put together in one single 'union' data selection range.

Data iteration:
  • Use one common Vis Iter and Vis Buffer (resident in the Flag Data Handler) for all the agents.

  • Use chunks as follows:

    • By default use chunks based on fixed values of (array,field,scan,DDI(SPW))

    • The user can specify a 'combination' parameter to determine the chunks as in gaincal.

  • Don't use timesteps, instead use a time interval:

    • By default the time interval will be calculated to avoid memory overload.

    • The user can specify a custom time interval as well.

  • Include stand-alone nextChunk()/prevChunk() methods in the Flag Data Handler to navigate through the iterators.

  • Switch to Jim Jacobs' asynchronous I/O for the iteration through the iterator
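To illustrate the "default time interval calculated to avoid memory overload" item, here is a sketch of the kind of sizing arithmetic involved. The function name, the byte counts and the formula are all assumptions for illustration, not the actual implementation.

```cpp
#include <cassert>
#include <cstddef>

// Pick how many integrations (timesteps) fit in a chunk, given a memory
// budget and the per-integration cost of visibilities plus flags.
std::size_t defaultTimesteps(std::size_t memoryBudgetBytes,
                             std::size_t nBaselines,
                             std::size_t nChannels,
                             std::size_t nCorrelations) {
    const std::size_t visBytes = 8;   // one complex<float> visibility
    const std::size_t flagBytes = 1;  // one boolean flag
    std::size_t bytesPerTimestep =
        nBaselines * nChannels * nCorrelations * (visBytes + flagBytes);
    std::size_t n = memoryBudgetBytes / bytesPerTimestep;
    return n > 0 ? n : 1;             // always load at least one integration
}
```

The user-specified custom time interval would simply override this computed default.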

Data mapping:
  • Implement the following signatures of a getView function in the Flag Data Handler in order to introduce the functionality that used to be in the Data Mapper:

    • Cube of NTime x NChan x NPol complex visibilities for a given baseline:

      • Cube<Complex> * getView(antenna1, antenna2)

    • Cube of NTime x NChan x NPol float values for a given baseline, as the result of applying a unary operation (abs, phase, real, imag):

      • Cube<Float> * getView(mathExpression, antenna1, antenna2)

Flags structure:
  • Per Vis Buffer, use a set of flag cubes as follows:

    • Original Flag Cube resident in the Flag Data Handler common for all the agents

    • Modified Flag Cube resident in the Flag Data Handler common for all the agents

    • Specific Flag Cube resident in each Flag Agent, containing only the flags created by that agent.

  • Flag Data Handler will have methods to:

    • Get the original Flag Cube

    • Get the modified Flag Cube

    • Set the modified Flag Cube in the Measurement Set object

  • Flag Agents will have methods to:

    • Retrieve the specific Flag Cube

    • Set the flags in the common and specific Flag Cubes

Base class:
  • The agents should all be derived from a common base class, with a run(Flag Data Handler *dh) method, generic for all kinds of data (MS,Cal Tables and Single Dish).

  • Port Quack, Manual, Clipping, and Shadow agents to be derived from this new base class.

Flags extension:
  • Move flag counters from RFA Flag Examiner into LF Extend Flags, making sure that counts are done on all required axes (i.e. including baselines)

  • Create flag commands suitable for manual flagging, based on the above mentioned flag counters, using thresholds as in TFCrop.

  • Add an additional step to apply the manual flag commands mentioned above

Detailed design

The following is a detailed design diagram of our proposal, including pseudo-code. The idea is to be as specific as possible, because there will be interactions between at least 3 developers coding at the same time. In addition, we include a table describing the methods of each of the new main classes (Flag Data Handler and Flag Agent Base) so that we can develop in parallel:

Detailed design diagram

FlagData-Refactoring-Detailed-Design.png

Flag Agent Base class methods and signature

Function name | Type | Input parameters | Output parameters | Description
setDataHandler | non-virtual | ref to Flag Data Handler object | none | Sets the Data Handler
setIterator | non-virtual | ref to Visibility Iterator | none | Sets the Visibility Iterator
setBuffer | non-virtual | ref to Visibility Buffer | none | Sets the Visibility Buffer
setDataSelection | non-virtual | map[string,string] | none | Sets the Agent's data selection
setParameters | virtual | map[string,string] | none | Sets the Agent's parameters
getFlagCommand | virtual | none | string | Retrieves the flag commands consistent with the Agent's parameters and data selection
run | non-virtual | ref to Flag Data Handler object | none | Iterates through the selected MS, flagging the data according to the Agent's algorithm
newChunk | non-virtual | none | Bool | Performs one incremental iteration of the Visibility Buffer and resets the Flag Cube to match the dimensions of the new chunk
prevChunk | non-virtual | none | Bool | Performs one decremental iteration of the Visibility Buffer and resets the Flag Cube to match the dimensions of the new chunk
initializeFlagCubes | non-virtual | none | none | Sets the original and modified flag cubes according to the dimensions of the current chunk
flush | non-virtual | none | none | Acquires the lock from the Data Handler, retrieves the Flag Cube corresponding to the current chunk and merges it (logical OR) with the newly computed flags
getCurrentFlagCube | non-virtual | none | ref Matrix[Bool] | Gets the Flag Cube from the current chunk
getComputedFlagCube | non-virtual | none | ref Matrix[Bool] | Gets the Flag Cube computed by this Agent
flag | virtual | none | none | Performs flagging over the current chunk
mergeFlagCubes | non-virtual | (ref Matrix[Bool], ref Matrix[Bool]) | Matrix[Bool] | Merges two flag cubes performing a logical OR
compareFlagCubes | non-virtual | (ref Matrix[Bool], ref Matrix[Bool]) | TBD | Compares the current and computed flag cubes in order to derive flagging statistics
fromSelectionToSortOrder | non-virtual | map[string,string] | Array[uInt] | Determines the optimum chunk order for a given selection range
fromSelectionToChannelList | non-virtual | map[string,string] | Matrix[uInt] | Creates the SPW/Channel selection matrix from a given selection range
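To make the mergeFlagCubes/compareFlagCubes rows above concrete, here is a hedged sketch. Flattened boolean vectors stand in for Matrix[Bool], and the comparison output (listed as TBD in the table) is taken here to be simply the count of newly set flags, which is the kind of statistic the Display/Examiner agents would need.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for Matrix[Bool]: a flattened boolean matrix.
using FlagMatrix = std::vector<bool>;

// mergeFlagCubes: element-wise logical OR of two equally sized flag cubes.
FlagMatrix mergeFlagCubes(const FlagMatrix& a, const FlagMatrix& b) {
    FlagMatrix out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] || b[i];
    return out;
}

// compareFlagCubes: one plausible statistic, the number of samples flagged
// by the agent that were not flagged in the original cube.
std::size_t compareFlagCubes(const FlagMatrix& original, const FlagMatrix& computed) {
    std::size_t newFlags = 0;
    for (std::size_t i = 0; i < original.size(); ++i)
        if (computed[i] && !original[i]) ++newFlags;
    return newFlags;
}
```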

Flag Data Handler class methods and signature

Function name | Type | Input parameters | Output parameters | Description
setMS | non-virtual | ref to Measurement Set object | none | Creates a Measurement Set object attached to a given file
addSelections | non-virtual | map[string,string] | none | Adds a selection to the selection lists
mergeSelections | non-virtual | none | none | Merges the various selections by applying a union
applySelections | non-virtual | none | ref to Measurement Set | Creates a Measurement Set selection from the original Measurement Set by applying the merged selection
getIterator | non-virtual | none | ref to Visibility Iterator | Sets up an iterator for a given chunk selection
getView | non-virtual | ref to Visibility Buffer | Matrix[Double] | Retrieves a custom view of the visibilities of a given Visibility Buffer
acquireLock | non-virtual | none | none | Acquires the lock to write in the Measurement Set object
releaseLock | non-virtual | none | none | Releases the lock to write in the Measurement Set object
flushMS | non-virtual | none | none | Flushes the Measurement Set to the file system

Sequence diagram

FlagData-Refactoring-Sequence-Diargam.png

Traceability matrix

Requirements-High Level Design

The following traceability table shows which class implements each requirement, and the corresponding tickets and developers in charge.

Requirement | Scope | Related Classes | Ticket | Assignee
1 | Data handling | Flag Data Handler | CAS-3245 | jagonzal
2 | Data selection | Flag Data Task | CAS-3244 | scastro
3 | Data iteration - chunk characteristics | Flag Data Handler, Flag Data Task | CAS-3245 | jagonzal
4 | Data iteration - asynchronous I/O | Flag Data Handler | CAS-3245 | jagonzal
5 | Data mapping | Flag Data Handler | CAS-3245 | jagonzal
6 | Flags structure | Flag Data Handler, Flag Agent Base | |
7 | Base class - running methods | Flag Agent Base | |
8 | Base class - porting | Flag Agent Quack, Flag Agent Manual, Flag Agent Clipping, Flag Agent Shadow | |
9 | Flag extension | LF Extend Flags, RFA Flag Examiner | |

Requirements-Tests

-- SandraCastro - 2011-06-09 -- JustoGonzalez - 2011-06-15
Topic attachments
I Attachment Action Size Date Who Comment
ClassesDiagram.pngpng ClassesDiagram.png manage 5 K 2011-06-17 - 11:48 JustoGonzalez Doxygen old classes diagram
FlagData-Refactoring-Detailed-Design.pngpng FlagData-Refactoring-Detailed-Design.png manage 196 K 2011-06-28 - 05:34 JustoGonzalez Detailed design diagram for our proposal
FlagData-Refactoring-Sequence-Diargam.pngpng FlagData-Refactoring-Sequence-Diargam.png manage 64 K 2011-06-28 - 11:36 JustoGonzalez Sequence diagram for our proposal
Flagger.pngpng Flagger.png manage 17 K 2011-06-16 - 11:33 JustoGonzalez Data handling related classes encapsulated in Flagger tool
FlaggingClassSchema.pngpng FlaggingClassSchema.png manage 197 K 2011-06-16 - 11:04 JustoGonzalez Old design diagram
Topic revision: r29 - 2011-07-21, SandraCastro