
NVIDIA GPUs

L40S GPU Node

The L40S GPU node is configured as a worker node within the Kubernetes cluster.

Node Information

  • The GPU node is fully integrated into the Kubernetes cluster
  • GPU resources are available for scheduling container workloads
  • Supports NVIDIA GPU workloads with proper resource allocation
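To confirm that the node is registered and advertising its GPUs, one way is to query the cluster directly (this assumes the NVIDIA device plugin is running and that the node carries the `nvidia.com/gpu.product` label, as used in the example below; `<gpu-node-name>` is a placeholder for your actual node name):

```shell
# List nodes labeled as carrying an L40S GPU
kubectl get nodes -l nvidia.com/gpu.product=NVIDIA-L40S

# Inspect a specific node's advertised GPU capacity
kubectl describe node <gpu-node-name> | grep nvidia.com/gpu
```

If the node is healthy, the `describe` output should show a non-zero `nvidia.com/gpu` count under both Capacity and Allocatable.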

Example Container Workload

Here's an example Pod specification that runs on the GPU node:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never  # nvidia-smi exits after printing; don't restart the pod
  containers:
    - name: cuda-container
      image: nvidia/cuda:12.0.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
  nodeSelector:
    nvidia.com/gpu.product: NVIDIA-L40S

To deploy and test:

bash
kubectl apply -f gpu-test.yaml
kubectl logs gpu-test
Tip

Make sure to include a GPU resource limit (nvidia.com/gpu) in your pod specifications to ensure proper scheduling on the L40S node. For extended resources such as GPUs, Kubernetes requires the request to equal the limit if both are given, so specifying the limit alone is sufficient.
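As a sketch, the resources stanza from the Pod example could also spell out the request explicitly; under the Kubernetes rules for extended resources, the request and limit values must be equal:

```yaml
resources:
  requests:
    nvidia.com/gpu: 1  # must equal the limit for extended resources
  limits:
    nvidia.com/gpu: 1
```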