Compute > GPU Instance > Overview

A GPU instance is a virtual server with a Graphics Processing Unit (GPU) additionally configured on the instance. It is widely used for workloads ranging from scientific discovery to deep learning.

You can enable GPU support by selecting either 1 or 2 GPUs for the instance.

Features

  • AI Training
  • AI Inference
  • High Performance Computing

GPU Specifications

Ultimate Performance for Deep Learning

Tesla V100 for NVLink
GPU Architecture: NVIDIA Volta
NVIDIA Tensor Cores: 640
NVIDIA CUDA Cores: 5120
Double-Precision Performance: 7.8 TFLOPS
Single-Precision Performance: 15.7 TFLOPS
Tensor Performance: 125 TFLOPS
GPU Memory: 32 GB
Memory Bandwidth: 900 GB/sec
ECC: Yes
Interconnect Bandwidth: 300 GB/sec
System Interface: NVIDIA NVLink
Form Factor: SXM2
Max Power Consumption: 300 W
Thermal Solution: Passive
Compute APIs: CUDA, DirectCompute, OpenCL™, OpenACC
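
The number of GPUs attached to an instance and the specifications above can be confirmed from inside a running GPU instance. The following is a minimal sketch (not part of the service itself) that uses the standard CUDA runtime API to count the visible GPUs and print a few device properties; on a Tesla V100 instance it is expected to report 1 or 2 devices, each with compute capability 7.0, 80 multiprocessors, and 32 GB of memory.

    // query_gpu.cu: list the GPUs visible to this instance and print key properties.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        // Expected to match the 1 or 2 GPUs selected when creating the instance.
        std::printf("GPUs visible to this instance: %d\n", count);

        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("GPU %d: %s\n", i, prop.name);
            std::printf("  Compute capability : %d.%d\n", prop.major, prop.minor);
            std::printf("  Multiprocessors    : %d\n", prop.multiProcessorCount);
            std::printf("  Global memory      : %.1f GB\n",
                        prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
            std::printf("  ECC enabled        : %s\n", prop.ECCEnabled ? "yes" : "no");
        }
        return 0;
    }

Compile and run on the instance with the CUDA toolkit, for example: nvcc query_gpu.cu -o query_gpu && ./query_gpu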