

bsc ai thesis / gpu computing


highborn

Journeyman III


10-17-2018 09:43 AM

Since this is my first post here, please allow me to introduce myself.

My name is Levente, and I'm currently a student at the University of Szeged in Hungary, working toward my BSc diploma.

TL;DR: Can I run an AI workload on the GPU using Vulkan?

Hello.

My thesis will be about AI; more specifically, I want it to be about genetic algorithms. That said, I'm very much a beginner in AI (and in everything else, obviously, being at the BSc level).

When I signed up for the thesis course (if that's the right word), I didn't really have a specific idea, so my professor pitched me one. I like it, but there's a "slight" problem with it for me.

He said it would be very good if we could replicate the results found here:

https://blog.openai.com/evolution-strategies/

I really like the idea, but there's a massive hardware limitation on my side: I can't throw hundreds, or even 32, cores and days of compute time at the problem. I'm an avid gamer and my rig is by no means a slouch, but it's obviously not a server farm. I sat on this dilemma for a while and came up with an idea, though I have no clue whether it can even be done or whether it's worth the time and effort.

My solution would be to try to run the computation on the GPU via the Vulkan API. As far as I know, my GPU is a good compute card, and if this kind of workload could be parallelized on it, that would be amazing. Note that the original code was written in Python, which as I understand is easy to write but terribly slow. I was thinking about rewriting it in C++ and trying to run the workload on the GPU.

My question is whether this is a feasible option, or whether I need to think about another topic.

Also, if yes, any material that would help me learn Vulkan would be appreciated.

P.S.: I also posted this on Reddit but got no good-enough answer, so I came here; maybe you guys can help. I also tried the Red Team community forum, but after some clicking around I realised this is probably a better place.

My computer:

Ryzen 7 1700X

Vega 64 (Strix)

16 GB of RAM

Solved!


Accepted Solution

nobodygamer

Adept II


10-17-2018 01:23 PM

Hi Levente,

In general, Evolutionary Algorithms are quite different from Neural Networks.

The main aspects that make NNs a good application for GPU programming are:

1. They use simple maths (addition, subtraction, and multiplication), and

2. They rely on matrix operations (something GPUs excel at).

EAs, although they are parallelisable (each individual can be evaluated and mutated in parallel), have an aspect that makes them difficult to run on a GPU: an individual is not always a simple mathematical expression that maps directly to GPU operations.

You should also consider that you will need to translate the representation of an individual into a structure to which you can apply GPU computations.
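To illustrate that translation step, here is a minimal sketch (all names and the toy sphere fitness function are my own illustrative choices, not from this thread): if each individual is a fixed-length vector of floats, the whole population can be packed into one matrix, and evaluation and mutation become single array operations, exactly the kind of workload a GPU compute kernel handles well.

```python
import numpy as np

rng = np.random.default_rng(42)

pop_size, genome_len = 8, 4

# The whole population lives in one (pop_size, genome_len) matrix,
# instead of a list of per-individual Python objects.
population = rng.standard_normal((pop_size, genome_len))

# Vectorized fitness: evaluate all individuals in one array operation
# (here, the negated sphere function), rather than a Python loop.
fitness = -np.sum(population ** 2, axis=1)   # shape: (pop_size,)

# Vectorized mutation: perturb every genome in a single operation.
mutated = population + 0.05 * rng.standard_normal(population.shape)
```

The same layout (one flat buffer, one thread per individual) is what you would hand to a Vulkan compute shader.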

On the question of which language to use: yes, C++ is the go-to option. I would also suggest looking into Python-based approaches; I know there are PyCUDA and Numba (NumPy on the GPU). A quick search also turned up Vulkan bindings for Python!

For me, the most important part of this project is to model your EA's individuals and evolutionary operators in a manner that can be run on a GPU.
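For the evolution-strategies variant described in the linked OpenAI post, that modelling works out nicely: a generation is just Gaussian noise sampling, parallel evaluation, and a reward-weighted matrix product. Below is a sketch of one ES update in NumPy (the hyperparameters and toy objective are illustrative assumptions, not from the post); each array operation here is the kind of step that could be moved into a GPU kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

def es_step(theta, fitness_fn, pop_size=50, sigma=0.1, lr=0.02):
    # Sample one Gaussian perturbation per individual: (pop_size, dim)
    noise = rng.standard_normal((pop_size, theta.size))
    # Evaluate every perturbed parameter vector (embarrassingly parallel)
    rewards = np.array([fitness_fn(theta + sigma * n) for n in noise])
    # Standardize rewards, then move theta along the reward-weighted noise
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return theta + lr / (pop_size * sigma) * noise.T @ rewards

# Toy objective: maximize -||x - target||^2, optimum at target
target = np.array([3.0, 3.0, 3.0])
fitness = lambda x: -np.sum((x - target) ** 2)

theta = np.zeros(3)
for _ in range(300):
    theta = es_step(theta, fitness)
```

Note that the inner loop over individuals is the part a GPU would absorb: on a Vega 64 each perturbed evaluation could run in its own invocation of a compute shader.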

Best,

Christos

2 Replies


highborn

Journeyman III


10-17-2018 05:17 PM

Re: bsc ai thesis / gpu computing

Thank you very much for your reply. It was very informative!
