Changes between Version 15 and Version 16 of ComputeStartDefault


Timestamp: 2010-11-09T10:04:35+01:00
Author: george
Comment:

--

  • ComputeStartDefault

v15 v16

= MCF Compute Manager =

v15, line 3: Our computational infrastructure is organised as a "cloud" and implemented using the [http://www.gridgainsystems.com/ GridGain 2.1.1] development platform. The whole computational logic is located at one cloud node, the MCF Compute Manager node. The rest of the computational base is standard [http://www.gridgainsystems.com/ GridGain] software deployed on a local network, cluster or server. MOLGENIS data management modules are also deployed on the MCF Compute Manager node. The topology of our "cloud" is shown below.

v16, line 3: Our computational infrastructure is organised as a "cloud" and implemented using the [http://www.gridgainsystems.com/ GridGain 2.1.1] development platform. The whole computational logic is located at one cloud node, the MCF Compute Manager node. The rest of the computational base is standard gridgain software deployed on a local network, cluster or server. MOLGENIS data management modules are also deployed on the MCF Compute Manager node. The topology of our "cloud" is shown below.

[[Image(back-end.gif, 750)]]

* Resource Manager, which starts and stops Worker nodes on the cluster.
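
The Worker nodes that the Resource Manager starts and stops are, as described above, plain GridGain nodes. The fragment below is only a rough sketch of what programmatically starting and stopping such a node could look like; it assumes the GridFactory entry point of the GridGain 2.x Java API, and the class and method names are assumptions rather than code taken from this project.

{{{
#!java
// Hypothetical sketch: bring one extra GridGain worker node up and down.
// Assumes the GridGain 2.x entry point org.gridgain.grid.GridFactory;
// verify the exact API against the release that is actually deployed.
import org.gridgain.grid.GridException;
import org.gridgain.grid.GridFactory;

public class WorkerNodeLifecycle {

    public static void main(String[] args) throws GridException {
        // Join the cloud with the default configuration; node discovery over
        // the local network makes it visible to the MCF Compute Manager node.
        GridFactory.start();
        try {
            // The node now simply waits for jobs dispatched by the Job Manager.
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException stopRequested) {
            Thread.currentThread().interrupt();
        } finally {
            // Leave the cloud again, cancelling any jobs still running here.
            GridFactory.stop(true);
        }
    }
}
}}}

In the deployment described here, such start/stop calls (or the equivalent GridGain startup scripts) would be issued by the Resource Manager on the cluster rather than by hand.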

v15, line 11: The Job Manager logic is rather straightforward and can easily be adjusted for use on a specific cluster or server. After a job is received by the Job Manager, it is registered in the database and passed to the Worker nodes for execution. There are two kinds of Worker nodes in the system: Resident Workers and Extra Workers. Basically, these are the same standard [http://www.gridgainsystems.com/ GridGain] nodes and differ only in name or cloud segment.

v16, line 11: The Job Manager logic is rather straightforward and can easily be adjusted for use on a specific cluster or server. After a job is received by the Job Manager, it is registered in the database and passed to the Worker nodes for execution. There are two kinds of Worker nodes in the system: Resident Workers and Extra Workers. Basically, these are the same standard gridgain nodes and differ only in name or cloud segment.

Why do we need two different kinds of nodes in the system if they have the same functionality? A workflow operation is the execution of a bioinformatics analysis tool, which is invoked from the command line. The usual output is a set of files plus standard command-line output and/or error. The difference between the two kinds of Worker nodes lies in the way the analysis tools are invoked from them. A Resident Worker starts a job by submitting a shell script to the cluster job scheduler. In contrast, an Extra Worker invokes the analysis tool directly; in this way the cluster scheduler can be circumvented.
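
A rough sketch of this difference in invocation is given below. The use of qsub as the scheduler command, the class name and the method signatures are illustrative assumptions and are not taken from the actual MCF Worker implementation.

{{{
#!java
import java.io.File;
import java.io.IOException;

/** Illustrative sketch of the two ways a Worker node can run an analysis tool. */
public class ToolInvocation {

    /** Resident Worker style: hand a generated shell script to the cluster job
     *  scheduler (qsub is used as an example; the real command may differ). */
    public static void submitToScheduler(File jobScript) throws IOException, InterruptedException {
        Process qsub = new ProcessBuilder("qsub", jobScript.getAbsolutePath())
                .inheritIO()
                .start();
        qsub.waitFor(); // returns as soon as the job is queued, not when it finishes
    }

    /** Extra Worker style: run the analysis tool directly on this node,
     *  circumventing the cluster scheduler. */
    public static int runDirectly(String... toolCommand) throws IOException, InterruptedException {
        Process tool = new ProcessBuilder(toolCommand)
                .redirectErrorStream(true) // stdout and stderr are both part of the job output
                .start();
        tool.getInputStream().transferTo(System.out); // stream the command-line output
        return tool.waitFor();                        // exit code of the analysis tool
    }
}
}}}

With direct invocation the Extra Worker already has the tool's exit code and output when runDirectly returns, whereas a Resident Worker only knows that the script has been queued and has to track job completion through the scheduler.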