Cedar System Status

Post date / System status / Update notes
2018-01-03 - 12:01 PST: Online

Cedar storage problem causing I/O errors

Jan 3, 12:54 MT: The problem has been resolved.

Jan 3, 07:30 MT: One of the Cedar storage metadata servers appears to be running out of resources. Users may see the following issues:

  • inability to log in (connection closed);
  • jobs or sessions crashing with read errors.

The vendor has been contacted.

2018-01-03 - 11:54 PST: Conditions

Cedar storage problem causing I/O errors (initial report; details as in the entry above).

2017-12-13 - 13:59 PST Online

Scheduling issue for jobs requesting fewer than 4 GPUs

Dec. 13, 2017: The problem has been fixed.

Dec. 11, 2017: The problem is still present; we are working on it.

Nov. 30, 2017: Jobs submitted through sbatch should now see the correct GPU(s). Jobs submitted through salloc (interactive jobs) still have a problem.

Nov. 30, 2017: Since the latest upgrade, Slurm may assign the same GPU to jobs requesting less than a full node's worth of GPUs (i.e., fewer than 4).
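Affected users can verify which GPUs a job actually received by printing the allocation from inside the job. The following is a minimal sketch of such a check, assuming a standard Slurm setup with generic resources; the account name is a placeholder, not a real allocation:

    #!/bin/bash
    #SBATCH --account=def-someuser   # placeholder account name
    #SBATCH --gres=gpu:2             # request 2 of a node's 4 GPUs
    #SBATCH --time=00:05:00

    # Slurm exposes the GPUs assigned to the job via CUDA_VISIBLE_DEVICES;
    # two distinct indices (e.g. "0,1") indicate a correct allocation.
    echo "CUDA_VISIBLE_DEVICES = $CUDA_VISIBLE_DEVICES"

    # nvidia-smi should likewise list two different devices.
    nvidia-smi -L

The same commands can be run in an interactive session started with salloc (e.g., salloc --gres=gpu:1), which per the Nov. 30 note was still affected at that time.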

2017-12-12 - 08:11 PST: Conditions

Scheduling issue for jobs requesting fewer than 4 GPUs (initial report; notes as in the entry above).

2017-12-04 - 14:41 PST: Online