4.2. Install CMAQv5.4 on ParallelCluster (optional)
Create the ParallelCluster with the base Ubuntu OS, using a c7g.large head node and c7g.16xlarge compute nodes.
Learn how to install the CMAQ software and underlying libraries, copy input data, and run CMAQ.
Notice
Skip this tutorial if you do not want to learn how to install the CMAQv5.4 software, and proceed to the post-processing and QA instructions. Note that you may wish to build the underlying libraries and CMAQ code if you want to create a ParallelCluster using a different family of compute nodes, such as the AWS Graviton-based c6gn.16xlarge.
- 4.2.1. Configure ParallelCluster
- 4.2.2. Create the hpc7g.16xlarge pcluster
- 4.2.3. Install CMAQ software and libraries on ParallelCluster version 3.6
- Login to updated cluster
- Change the default shell to tcsh
- Check that tcsh is the default shell
- Reload the environment modules
- Check that the Elastic Network Adapter (ENA) is enabled
- Verify the gcc compiler version is greater than 8.0
- Change directories to install and build the libraries and CMAQ
- Build netcdf C and netcdf F libraries - these scripts work for the gcc 8+ compiler
- A .cshrc script setting LD_LIBRARY_PATH was copied to your home directory; enter the shell again and check the environment variables that were set using
- If the .cshrc was not created use the following command to create it
- Execute the shell to activate it
- Verify that you see the following setting
- Build I/O API library
- Build CMAQ
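The shell and compiler checks in the steps above can be sketched as follows. This is a minimal sketch: the `version_line` string is a sample for illustration, not output captured from the cluster (there you would use `gcc --version | head -n1`).

```shell
# Check that tcsh is the default login shell
echo "default shell: $SHELL"         # expect /bin/tcsh on the configured cluster

# Verify the gcc major version is 8 or greater before building
# the netCDF libraries, I/O API, and CMAQ.
version_line="gcc (Ubuntu 9.4.0-1ubuntu1) 9.4.0"   # sample string, not live output
major=$(echo "$version_line" | awk '{print $NF}' | cut -d. -f1)
if [ "$major" -ge 8 ]; then
    echo "gcc major version $major: OK"
else
    echo "gcc major version $major: too old, need 8+" >&2
fi
```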
- 4.2.4. Install netCDF libraries that use HDF5 and support nc4 compressed files
- 4.2.5. Install gh following these instructions
- 4.2.6. Use gh authentication
- 4.2.7. Run CMAQ using hpc7g.16xlarge compute nodes
- Verify that you have an updated set of run scripts from the pcluster-cmaq repo
- Verify that the input data is imported to /fsx from the S3 Bucket
- Preloading the files
- Create a /fsx/data and /fsx/data/output directory
- Link the data to what is being used in the run scripts
- Run the 12US1 Domain on 32 pes
- Check the status in the queue
- Check on the status of the cluster using CloudWatch
- Check the timings while the job is still running using the following command
- When the job has completed, use tail to view the timing from the log file.
- Submit a request for a 64 pe job (2 x 32 pe) using 2 nodes
- Check on the status in the queue
- Check the status of the run
- Check whether the scheduler sees physical CPUs or vCPUs
- When multiple jobs are submitted to the queue, they will be dispatched to different compute nodes.
- When the job has completed, use tail to view the timing from the log file.
- Submit a job to run on 96 cores (3 nodes x 32 pes)
- Verify that it is running on 3 nodes
- Check the log for how quickly the job is running
- Submit a job to run on 128 cores (4 nodes x 32 pes)
- Verify that it is running on 4 nodes
- Check the log for how quickly the job is running
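The submit-and-monitor cycle above follows the same pattern at each core count; a hedged sketch for the 96-core case (3 nodes x 32 pes) is below. The Slurm commands are shown as comments for context, and the run script name is illustrative; use the actual scripts from the pcluster-cmaq repo.

```shell
# 96-core case: 3 compute nodes x 32 pes per node
NODES=3
PES_PER_NODE=32
TOTAL=$((NODES * PES_PER_NODE))
echo "requesting $TOTAL pes across $NODES nodes"

# On the cluster (not run here; run script name is illustrative):
# sbatch run_cctm.csh               # submit the job to the Slurm queue
# squeue                            # check the status in the queue
# scontrol show nodes | grep State  # verify it is running on 3 nodes
# tail CTM_LOG*                     # view the timing from the log file
```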
- 4.2.8. Run CMAQ using hpc7g.8xlarge compute nodes
- Verify that you have an updated set of run scripts from the pcluster-cmaq repo
- Run the 12US1 Domain on 32 pes on hpc7g.8xlarge
- When the job has completed, use tail to view the timing from the log file.
- Submit a request for a 64 pe job (2 x 32 pe) using 2 nodes on hpc7g.8xlarge
- Check on the status in the queue
- Check the status of the run
- Check whether the scheduler sees physical CPUs or vCPUs
- When multiple jobs are submitted to the queue, they will be dispatched to different compute nodes.
- When the job has completed, use tail to view the timing from the log file.
- Submit a job to run on 96 cores (3 nodes x 32 pes) on hpc7g.8xlarge
- Verify that it is running on 3 nodes
- Check the log for how quickly the job is running
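For the "physical CPUs or vCPUs" check that appears in both run sections, one way to see what the node hardware exposes is to inspect the CPU topology locally (a sketch; Slurm's own view of the nodes comes from commands such as `sinfo`):

```shell
# nproc reports the number of processing units visible to the OS; on a
# node with simultaneous multithreading disabled (always the case on
# Graviton processors) this equals the physical core count, so the
# scheduler's CPU count matches cores rather than vCPUs.
echo "scheduler-visible cpus: $(nproc)"
lscpu | grep -i 'thread(s) per core'   # 1 => no hyperthreading
```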
- 4.2.9. Install Input Data on ParallelCluster
- Verify the AWS CLI is available to obtain data from the AWS S3 Bucket
- Verify you can run the aws command
- Copy Input Data from the S3 Bucket to the Lustre filesystem
- Use the S3 script to copy the CONUS input data from the CMAS s3 bucket
- For ParallelCluster: Import the Input data from a public S3 Bucket (optional)
- Convert the *.nc4 compressed netCDF4 files to netCDF classic (nc3) files
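The last step, converting compressed *.nc4 files to netCDF classic, can be sketched with `nccopy` from the netCDF utilities. The /fsx/data path follows the directories created above, and `-k classic` writes the nc3 format; the conversion line is commented out so the loop can be previewed safely first.

```shell
# Convert each compressed netCDF4 (*.nc4) file in /fsx/data to a
# netCDF classic (.nc) file alongside it.
for f in /fsx/data/*.nc4; do
    [ -e "$f" ] || continue          # skip if no .nc4 files are present
    out="${f%.nc4}.nc"
    echo "converting $f -> $out"
    # nccopy -k classic "$f" "$out"
done
```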