Grid computing

Grid computing is an efficient distributed computing model and is particularly suitable for large-scale tasks that can be split into small jobs. For example, in ASR the statistics accumulation of EM can be split into small accumulation jobs, and the partial statistics are then merged to update the models. Likewise, decoding tasks can be split into small decoding jobs and distributed over a cluster of computing nodes. Grid computing is also useful in multi-session scenarios such as large-scale online ASR services, where each ASR request is handled by a dedicated node that the grid system allocates automatically according to the load of the nodes in the grid.
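As an illustration of the accumulate-then-merge pattern above, the following is a minimal sketch of how the two stages could be submitted with qsub: an SGE array job computes partial statistics on data chunks, and a merge job is held back until the whole array has finished. The scripts accumulate.sh and merge.sh, the job names, and the queue name all.q are hypothetical placeholders, not part of our actual setup.

import os
import subprocess

NUM_CHUNKS = 20  # number of data chunks, i.e. parallel accumulation tasks
os.makedirs("log", exist_ok=True)

# Array job: SGE runs accumulate.sh once per task, with SGE_TASK_ID set to
# 1..NUM_CHUNKS so each task knows which data chunk to process.
subprocess.run(
    ["qsub", "-cwd", "-q", "all.q",
     "-N", "acc_stats",            # job name, referenced by the merge job below
     "-t", f"1-{NUM_CHUNKS}",      # array job with NUM_CHUNKS tasks
     "-o", "log/", "-e", "log/",   # stdout/stderr files are written under log/
     "accumulate.sh"],             # hypothetical accumulation script
    check=True,
)

# Merge job: -hold_jid keeps it queued until every acc_stats task has finished;
# merge.sh (hypothetical) then combines the partial statistics and updates the model.
subprocess.run(
    ["qsub", "-cwd", "-q", "all.q",
     "-N", "merge_stats",
     "-hold_jid", "acc_stats",
     "-o", "log/", "-e", "log/",
     "merge.sh"],
    check=True,
)

With -hold_jid the merge job stays in the queue until every acc_stats task has completed, so the split-and-merge flow needs no manual synchronization.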

We are constructing and maintaining a grid computing system based on Sun Grid Engine (SGE). Currently we have just two nodes, which serve the ASR team. If you are interested, you are welcome to add your own nodes and use the shared computing resources.

To join this framework, you need to:

1. Decide that you want to share your box, which should run Linux and should not be too weak (at least a 2.4 GHz CPU and 4 GB of memory).

2. Let the administrator install the SGE software.

3. Apply for an NIS account.

4. Manage your tasks by splitting them into small jobs appropriately (see the sketch after this list).

5. Kick off your experiments.
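
The sketch below illustrates steps 4 and 5: it splits an utterance list into small chunks and kicks off one decoding job per chunk. It is only a sketch under assumed names: wav.list, decode.sh, the job name, and the queue all.q are hypothetical placeholders; adapt them to your own recipe.

import os
import subprocess

CHUNK_SIZE = 200  # utterances per job; choose so each job runs a few minutes

# Read the full task: one utterance (or wav file) per line.
# "wav.list" is a hypothetical input file.
with open("wav.list") as f:
    utts = [line.strip() for line in f if line.strip()]

os.makedirs("chunks", exist_ok=True)
os.makedirs("log", exist_ok=True)

# Step 4: split the task into small, roughly equal jobs.
chunk_files = []
for i in range(0, len(utts), CHUNK_SIZE):
    path = f"chunks/chunk_{i // CHUNK_SIZE:03d}.list"
    with open(path, "w") as out:
        out.write("\n".join(utts[i:i + CHUNK_SIZE]) + "\n")
    chunk_files.append(path)

# Step 5: kick off one SGE job per chunk; the hypothetical decode.sh
# takes a chunk list as its only argument.
for path in chunk_files:
    subprocess.run(
        ["qsub", "-cwd", "-q", "all.q",
         "-N", "decode",
         "-o", "log/", "-e", "log/",
         "decode.sh", path],
        check=True,
    )

After submission, qstat shows the state of your jobs and qdel removes a job you no longer want.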


More instructions will be posted in the near future. You can contact Mengyuan Zhao or Dong Wang for more information.