Start MATLAB® and connect to a COMSOL Multiphysics server
<target> -h

-3drend ogl | sw

-alloc {native} | scalable

-applicationsroot <path>

-autosave {on} | off

BLAS library to use. mkl and aocl are supported for Intel and AMD processors; armpl and openblas are supported for ARM processors.

-blaspath <path>

-c <path>
Cluster partitioning method. Choose from mesh ordering (mo), nested dissection (nd), or weighted nested dissection (wnd).

Cluster storage format. The single format does I/O only from the root node, while the shared format does I/O using distributed I/O operations. The shared format requires that all nodes have access to the same storage area and the same temporary storage area.

-comsolinifile <path>

-configuration <path>

Path to a directory for storing the GUI state between sessions and for performing various caching tasks. By default, the configuration directory is a subdirectory of the preference directory; this default location is not affected by the -prefsdir option. When running in batch or cluster mode, add @process.id to the path to get a unique identifier (for example, -configuration /tmp/comsol_@process.id).

-data <path>

Path to a workspace directory for storing internal workspace information. By default, the workspace directory is a subdirectory of the preference directory; this default location is not affected by the -prefsdir option. The workspace directory is cleared when COMSOL is launched. When running in batch or cluster mode, add @process.id to the path to get a unique identifier (for example, -data /tmp/comsol_@process.id).
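As a sketch of the @process.id mechanism described above (the model file name is hypothetical, and the command line is echoed here rather than executed):

```shell
# Hypothetical batch launch with per-process configuration and workspace
# directories; COMSOL expands @process.id to a unique identifier.
CMD="comsol batch -configuration /tmp/comsol_@process.id -data /tmp/comsol_@process.id -inputfile model.mph"
echo "$CMD"
```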
-docroot <path>

-keeplicenses on | {off}

-np <no. of cores>

-numafirst <numa number>

-numasets <no. of sets>

-prefsdir <path>

-recoverydir <path>

-tmpdir <path>

-v, -version
You can also specify the number of cores and sockets and the use of the scalable allocator as preferences on the Computing>Multicore page in the Preferences window. To specify those numbers manually, select the Number of cores and Number of sockets checkboxes and enter a number in the associated text fields. By default, all cores are used and the number of sockets is set automatically. If you lower the number of cores, it is good practice to also lower the number of sockets. The preference option for the scalable allocator is called Optimized for multicore. If you want to choose a memory allocator other than the default setting, select the Memory allocator checkbox and then choose Native or Optimized for multicore. To control the scalability assembling mode, which can be useful even when running on a single node, select the Memory scalability optimization for assembling checkbox. You can then select Off (the default, for no scalability mode), Matrix for activating scalability mode only for matrix assembling, Vector for activating scalability mode only for vector assembling, or All for activating scalability mode in all cases. You can also control this setting with the command-line argument -memoptassem.
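A minimal sketch combining these options on one command line (the lowercase value spelling for -memoptassem and the model file name are assumptions; the command is echoed, not run):

```shell
# Hypothetical: 8 cores, scalable allocator, scalability mode for matrix
# assembly only.
CMD="comsol batch -np 8 -alloc scalable -memoptassem matrix -inputfile model.mph"
echo "$CMD"
```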
Use the AMD Optimizing CPU Libraries (included with the installation for Intel and AMD processors). (The obsolete BLAS option blis uses the AMD Optimizing CPU Libraries.)

Use a BLAS library specified using the option -blaspath or the environment variable COMSOL_BLAS_PATH.

-appargnames <names>

-appargvalues <values>

-appargsfile <filename>

-appargvarlist <names>

-appargfilelist <filenames>

-open <file>

-run <file>
Ask for login information: info means that only missing information is asked for; force resets the password; never requires that the login information is already available; auto automatically creates a new username and password.
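For illustration (assuming the modes above belong to a -login option; the command is echoed rather than executed):

```shell
# Hypothetical: start a server and force a password reset. The -login flag
# name is an assumption based on the modes described above.
CMD="comsol mphserver -login force"
echo "$CMD"
```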
-passwd reset | nostore

-port <port>

-portfile <path>

Specify that COMSOL Multiphysics writes its server port to the given <path> when it has started listening.

-user <user>
Always make sure that untrusted users cannot access the COMSOL login information. Protect the file .comsol/v63/login.properties in your home directory. This is important when using the COMSOL Multiphysics client–server configuration. Alternatively, start the COMSOL Multiphysics server with the -passwd nostore option, and clear the Remember username and password checkbox when connecting to the server. This ensures that your login information is not stored on file.
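One way to protect the file on Linux or macOS is to restrict it to the owner; a sketch (the file is created here only so the commands are self-contained):

```shell
# Make the stored login information readable and writable by the owner only.
LOGIN_FILE="$HOME/.comsol/v63/login.properties"
mkdir -p "$(dirname "$LOGIN_FILE")"   # ensure the directory exists
touch "$LOGIN_FILE"                   # placeholder if the file is missing
chmod 600 "$LOGIN_FILE"               # owner read/write only
```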
Documentation server port when started from comsol.exe.

Documentation server port when started from comsoldoc.exe.

-open <file>

-port <port>

-server <server name>
-alivetime <seconds>

-batchlog <filename>

-checklicense <filename>

-classpathadd <classpath>
Run as client. If -mode is batch (the default), the server that the client connects to must also be started with -mode batch; this requirement does not apply when running in -mode desktop.
-dev <filename>

-error {on} | off

-external <process tag>

The external process target <process tag> for an operation.

-host <hostname>

Connect to host <hostname>.

-inputfile <filename>
Run a Model MPH-file or class file. Also supports a model version location (dbmodel:/// URI) to run a model version stored in a Model Manager database. See Running COMSOL Batch with Models in Databases for more details.
-job <job tag>

-jobfile <filename>

Specify a text file using the following format:

<inputfile0> <outputfile0>
<inputfile1> <outputfile1>
<inputfile2> <outputfile2>

If the filename <inputfile0> or <outputfile0> contains spaces, surround the path with double quotation marks ("...").
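A minimal jobfile following this format might look as follows (all file names are hypothetical):

```
model1.mph model1_out.mph
model2.mph model2_out.mph
"input with spaces.mph" "output with spaces.mph"
```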
Run a method call with the given tag. The file in <filename> contains the method call.

-mode {batch} | desktop

Ignore Batch and Cluster Computing settings. See Ignoring Batch and Cluster Computing Settings above. If -mode is batch (the default), the server that the client connects to should also be launched with -mode batch.

Do not compute the model. This option is useful if, for example, you just want to run -clearsolution or -clearmesh on a model that already includes a solution or mesh and then save it without the solution or mesh, without computing the model first.
Sets the location on the file system where output files produced by a batch run are stored. If not used, the files are stored in the same directory as the output model or, if the output model is stored in the database, in the standard batch directory (specified by the preference setting Computing > Cluster > Batch directory).

-outputfile <filename>

Save a Model MPH-file using the given filename. If no output file is given, the input file is overwritten with the output. Also supports a model version location (dbmodel:/// URI) to save a model version to a Model Manager database. See COMSOL Batch under a Floating Network License for more details. Note that a class file as input cannot currently be combined with a model version location as output.
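Putting -inputfile and -outputfile together (file names are hypothetical; the command is echoed, not executed):

```shell
# Hypothetical: solve model.mph and save the result under a new name,
# logging batch output to model.log via -batchlog (listed above).
CMD="comsol batch -inputfile model.mph -outputfile model_solved.mph -batchlog model.log"
echo "$CMD"
```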
-paramfile <filename>

-pindex <parameter indices>

-plist <parameter value>

-pname <parameter name>

-port <port number>

Connect to port <port number>.

Add extra command-line arguments using -prodargs followed by the arguments, placed last in the call to COMSOL batch.
Compact the history before saving. This argument can be useful together with -clearmesh and -clearsolution to reduce the size of the saved file.

-stoptime <time to stop>

-study <study tag>

If the model uses an inner parametric solver (in a nested parametric sweep), the Job Configurations node is ignored by the batch job. In such cases, you need to switch to an outer parametric solver.
-classpathadd <classpath>

-jdkroot <path>

-icon <path>

-outputdir <path>

Specify where to store the runtime when running the application. The default option is the platform's default location. The ask option asks the user for the location of the runtime when running the application. The <path> option provides a location where the runtime should be unpacked and stored. Only specify a path when compiling for a single platform.

-runtimewindows <path>

-runtimelinux <path>

-runtimemacOS <path>

-runtimetype {download} | embed

-splash <path>
comsol -nn <nn> batch

comsol -nn <nn> mphserver

comsol -nn <nn>
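A sketch of a distributed launch using these forms (all counts and the model file are hypothetical; the command is echoed, not executed):

```shell
# Hypothetical: 4 compute nodes, 2 processes per host, 8 cores per process.
CMD="comsol -nn 4 -nnhost 2 -np 8 batch -inputfile model.mph"
echo "$CMD"
```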
-mpiarg <arg>

-mpibootstrapexec <path>

Select network fabrics, where fabric1 is one of <shm | ofi> and fabric2 is <ofi>. This option is not supported for ARM processors.

-mpihosts <hostnames>

Set the MPI I/O mode. Setting this property to off means that COMSOL does not search for a distributed file system. Setting this property to gpfs, lustre, or panfs makes COMSOL assume it is running on the selected file system.

-mpiofiroot <path>

-mpipath <file>

-mpirmk <pbs>

-mpiroot <path>

-nn <no. of nodes>

-nnhost <no. of nodes>

Scalapack library to use. For the path option, the environment variable COMSOL_SCALAPACK_PATH must be set. The argument mkl is not supported for ARM processors.

-scalapackpath <path>
• You can set the remote node access mechanism used for connecting with the switch -mpibootstrap. The valid options are ssh, rsh, fork, slurm, ll, lsf, sge, and jmi. This is important if the cluster supports only a remote node access mechanism other than ssh, because ssh is the default protocol.

• Use the switch -mpibootstrapexec to set the path to the remote node access mechanism, such as /usr/bin/ssh.

• The option -mpidebug sets the output level from MPI. The default is level 4.

• You can control the network fabrics used for communication with the option -mpifabrics fabric1:fabric2, where fabric1 is one of shm or ofi, and fabric2 is ofi. The option -mpiofiprovider controls the network provider used by the OFI (Open Fabrics Interfaces). The options are: mlx, tcp, psm2, psm3, sockets, efa, rxm, or verbs. Use these options if you are having trouble with the default fabrics. These options are not supported for ARM processors.

• Use -mpienablex to enable Xlib forwarding. Xlib forwarding is off by default.
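Combining several of these switches in one launch (values taken from the bullets above; the model file is hypothetical and the command is echoed, not executed):

```shell
# Hypothetical: ssh bootstrap, shm:ofi fabrics, verbose MPI output.
CMD="comsol -nn 2 batch -mpibootstrap ssh -mpifabrics shm:ofi -mpidebug 10 -inputfile model.mph"
echo "$CMD"
```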
Run the parakill command.

Run the help command.
• If you are using a PBS-based scheduler, add -clustersimple or -mpirmk pbs to the command line in order for Intel MPI to interpret the environment correctly.

• If you are using an LSF-based scheduler, add -clustersimple or -mpirmk lsf to the command line in order for Intel MPI to interpret the environment correctly.

• The Intel MPI library automatically tries to detect the best option for communication and uses InfiniBand if it detects it. To verify that COMSOL is using InfiniBand, check the output from the startup of COMSOL with -mpidebug 10; it should not mention TCP transfer mode.

• If you have problems with the correct selection of your network card, add the -mpiofiprovider <network type> option to the command line.

• The -mpiofiprovider sockets option selects general TCP network communication and can be used to find out if there is a problem in the Open Fabrics Interfaces stack of the machine.

• Use the -mpi mpich2 option to switch from the default Intel MPI library to the MPICH2 library.
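For a PBS-based scheduler, the tip above might translate into a command line like this (the file names are assumptions; the command is echoed, not executed):

```shell
# Hypothetical: let Intel MPI read the PBS environment via -clustersimple.
CMD="comsol batch -clustersimple -inputfile model.mph -batchlog model.log"
echo "$CMD"
```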
Start the server with graphics libraries. This enables plotting on the server. Available only when running comsol mphserver matlab [<options>].

-host <hostname>

-mlroot <path>

Start in the directory path <path>.

-port <port>