OpenNebula GPU AI Deep learning


I’m looking into building a private cloud with OpenNebula so that I can develop AI and learn more about deep learning and machine learning, using GPUs to train and test on datasets (similar to the services offered by cloud providers such as AWS).

I can get access to a single PC with three GPU cards.

  1. Is this possible?
  2. Has anyone done anything similar?
  3. The main reason for building a private cloud is cost.


GPU (PCI) passthrough is supported from v4.14 onwards.

It lets you allocate a PCI GPU device exclusively to one VM at a time.
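For reference, the host's PCI devices can be listed with `onehost show <host_id>`, and a GPU can be requested in the VM template with a `PCI` attribute. A minimal sketch of such a template fragment (the vendor/class IDs below are illustrative for an NVIDIA card, not taken from this thread):

```
# VM template fragment requesting passthrough of one GPU.
# VENDOR/DEVICE/CLASS act as filters against the host's PCI list;
# check the actual IDs with "onehost show <host_id>" on your host.
PCI = [
  VENDOR = "10de",   # example: NVIDIA vendor ID
  CLASS  = "0300"    # example: VGA-compatible controller
]
```

Any subset of the filter attributes can be given; the scheduler then places the VM only on a host that has a matching free PCI device.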

I haven’t tried it myself, but I might do so soon …