These settings are saved in the Multicore and Cluster Computing section of The Preferences Dialog Box.

After making these settings, click the Save as Default button on the Settings window toolbar to save the current directory settings as the default preference.
• When General is selected and you have started a multiprocessor daemon (MPD) on the computer, select the MPD is running check box.
• The entry in the Host file field specifies the host file used for the job. If left empty, MPD looks for a file named mpd.hosts in the Linux home directory.
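A host file of this kind is a plain-text file listing one cluster machine per line. A minimal sketch, assuming the MPICH-style mpd.hosts syntax in which an optional :n suffix gives the number of processes to start on that host (the node names below are hypothetical):

```
node001
node002:2
node003:2
```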
• Use the Bootstrap server setting to select which bootstrap server MPI should use.
• If your cluster runs Linux and requires that SSH (secure shell) or RSH (remote shell) be installed in an uncommon directory, use the Rsh field to set the RSH communication protocol.
• If you must provide extra arguments to MPI, use the Additional MPI arguments field.
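The contents of the Additional MPI arguments field are passed through to the MPI launcher. As a hypothetical illustration (the option names below assume a Hydra-style mpiexec with Intel MPI and are not taken from this document):

```
-verbose -genv I_MPI_DEBUG 5
```

Here -verbose makes the launcher print what it executes, and -genv sets an environment variable for all MPI processes.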
• Enter the Number of nodes (physical nodes) to use. The default is 1 node.
• Enter the Number of processes on host. The default is 1.
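For comparison, the same resources can be requested when launching a batch job from the command line. A sketch, assuming the comsol batch command with its -nn (number of nodes), -np (number of processes/cores), and -f (host file) options; the file names are hypothetical:

```shell
comsol batch -nn 4 -np 2 -f mpd.hosts \
    -inputfile model.mph -outputfile model_solved.mph
```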
• If you want to include scheduler arguments, add them to the Additional scheduler arguments field (for example, for mpiexec).
• If you must provide extra arguments to MPI, use the Additional MPI arguments field.
• Enter the Number of nodes (physical nodes) to use. The default is 1 node.
• Select a Node granularity: Node (the default), Socket, or Core. Node allocates one process on each host, Socket allocates one process on each socket, and Core allocates one process on each core.
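The three granularity levels correspond to the resource unit types that a Windows HPC scheduler allocates by. As a rough illustration, Microsoft HPC Pack's job submit command exposes the same distinction (the counts and command name below are hypothetical):

```
:: one allocation unit per node, socket, or core, respectively
job submit /numnodes:4 mycommand.exe
job submit /numsockets:8 mycommand.exe
job submit /numcores:16 mycommand.exe
```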
• The Exclusive nodes check box is selected by default. Clear it if you want to run on nodes shared by other users.
• The entry in the Scheduler field is the IP address of the enterprise adapter of the head node or the DNS name of the head node. The default is localhost.
• Set the names of Requested nodes. The job scheduler only allocates jobs to the nodes you list.
• Enter the Node group. The job scheduler only allocates jobs to nodes belonging to that group.
• Enter the minimum required Cores per node. The default is 0. The job scheduler only allocates jobs to nodes with at least as many cores as you set.
• Enter the minimum required Memory per node (MB). The default is 0. The job scheduler only allocates jobs to nodes with at least as much memory as you set.
• The entry in the User field is the user account that COMSOL Multiphysics uses for submitting the job. You provide the password in a separate command window that opens at execution time, where you can optionally save the credentials.
• Select a Priority for the scheduled job: Highest, Above normal, Normal (the default), Below normal, or Lowest.
• If you want to include scheduler arguments, add them to the Additional scheduler arguments field (for example, for mpiexec).
• If you must provide extra arguments to MPI, use the Additional MPI arguments field.
• Enter the Number of nodes (physical nodes) to use. The default is 1 node.
• The Exclusive nodes check box is selected by default. Clear it if you want to run on nodes shared by other users.
• The entry in the Scheduler field is the IP address of the enterprise adapter of the head node or the DNS name of the head node. The default is localhost.
• Set the names of Requested nodes.
• The entry in the User field is the user account that the COMSOL software uses for submitting the job. You provide the password in a separate command window that opens at execution time, where you can optionally save the credentials.
• Select a Priority for the scheduled job: Highest, Above normal, Normal (the default), Below normal, or Lowest.
• If you want to include scheduler arguments, add them to the Additional scheduler arguments field (for example, for mpiexec).
• Select the Bootstrap server that should be used by MPI.
• If your cluster runs Linux and requires that SSH (secure shell) or RSH (remote shell) be installed in an uncommon directory, use the Rsh field to set the RSH communication protocol.
• If you must provide extra arguments to MPI, use the Additional MPI arguments field.
• Select a Slot granularity (Host, Slot, or Manual) to specify whether COMSOL Multiphysics should parallelize on the physical Host level or on the OGS/GE-allocated Slot level. For Host and Slot, specify the Number of slots to allocate. The Manual setting gives finer control over the granularity: set the number of computational nodes to use in the Number of nodes field. For Slot and Manual, the number of processes on each node is set in the Number of processes on host field; usually this is 1.
• Enter the Queue name to set the name of the Sun Grid Engine queue.
• Enter the Number of processes on host to set the number of processes on each host.
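For orientation, the slot and Queue name settings map onto standard OGS/GE concepts: slots are requested through a parallel environment and the job is directed to a named queue. A hypothetical qsub equivalent (the parallel environment name mpi, queue name all.q, and script name are assumptions):

```shell
qsub -pe mpi 16 -q all.q jobscript.sh
```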
• If you want to include scheduler arguments, add them to the Additional scheduler arguments field (for example, for mpiexec).
• If you must provide extra arguments to MPI, use the Additional MPI arguments field.
• Enter the Number of nodes (physical nodes) to use. The default is 1 node.
• Enter the Queue name to set the name of the Sun Grid Engine queue.
• The Exclusive nodes check box is selected by default. Clear it if you want to run on nodes shared by other users.
• The entry in the Scheduler field is the IP address of the enterprise adapter of the head node or the DNS name of the head node. The default is localhost.
• Set the names of Requested nodes.
• Enter the minimum required Memory per node (MB). The default is 0. The job scheduler only allocates jobs to nodes with at least as much memory as you set.
• Enter the Runtime (minutes) before the job is canceled. The default is Infinite; that is, the job is never canceled.
• The entry in the User field is the user account that the COMSOL software uses for submitting the job. You provide the password in a separate command window that opens at execution time, where you can optionally save the credentials.
Micromixer — Cluster Version: Application Library path COMSOL_Multiphysics/Tutorials/micromixer_cluster.