NVIDIA GPUs
L40S GPU Node
The L40S GPU node is configured as a worker node within the cluster.
Node Information
- The GPU node is fully integrated into the Kubernetes cluster
- GPU resources are available for scheduling container workloads
- Supports NVIDIA GPU workloads with proper resource allocation
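Before scheduling workloads, you can confirm that the node actually advertises GPU capacity to the scheduler by inspecting its allocatable resources. A minimal sketch, assuming the NVIDIA device plugin is running; `<node-name>` is a placeholder for the L40S node's actual name:

```shell
# List all nodes with their advertised GPU count (the nvidia.com/gpu
# resource is only present once the NVIDIA device plugin is running).
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'

# Inspect a specific node in detail; replace <node-name> with the
# L40S node's name from the output above.
kubectl describe node <node-name> | grep -A 7 "Allocatable"
```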
Example Container Workload
Here's an example pod that runs on the GPU node:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:12.0.0-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
  nodeSelector:
    nvidia.com/gpu.product: NVIDIA-L40S
```
To deploy and test:
```bash
kubectl apply -f gpu-test.yaml
kubectl logs gpu-test
```
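If scheduling succeeds, the container runs `nvidia-smi` once and exits, so the logs should show the usual `nvidia-smi` device table listing the L40S. Since the pod completes after the command finishes, it can be checked and cleaned up afterwards:

```shell
# The pod reaches the Succeeded phase once nvidia-smi has run.
kubectl get pod gpu-test

# Remove the test pod when finished.
kubectl delete pod gpu-test
```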
**Tip:** Make sure to include GPU resource requests (`nvidia.com/gpu`) in your pod specifications to ensure proper scheduling on the L40S node.
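For longer-running workloads, the same GPU limit and node selector carry over to a Deployment spec. A minimal sketch; the names, image, and command below are placeholders, not part of the cluster configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-workload            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-workload
  template:
    metadata:
      labels:
        app: gpu-workload
    spec:
      containers:
      - name: cuda-app
        image: nvidia/cuda:12.0.0-base-ubuntu22.04   # replace with your workload image
        command: ["sleep", "infinity"]               # placeholder command
        resources:
          limits:
            nvidia.com/gpu: 1   # request one L40S GPU
      nodeSelector:
        nvidia.com/gpu.product: NVIDIA-L40S
```

Because `nvidia.com/gpu` is an extended resource, setting it under `limits` is sufficient; Kubernetes treats the request as equal to the limit.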