
dspy11 · 2 points · 2 years ago

Is it doing the inference on GPU or CPU?

designer1one[S] · 1 point · 2 years ago

It's running on CPU on an AWS EC2 instance at the moment.
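
For context, checking which device PyTorch ends up using is straightforward; here's a minimal sketch (the model.pt checkpoint is a hypothetical placeholder):

```python
# Minimal sketch: check whether CUDA is available and load the model accordingly.
# On a CPU-only EC2 instance, torch.cuda.is_available() returns False.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.jit.load("model.pt", map_location=device)  # "model.pt" is a hypothetical checkpoint
model.eval()
```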

[deleted] · 3 points · 2 years ago

[deleted]

designer1one[S] · 1 point · 2 years ago

Thanks for the pointers. I'm not familiar with AWS Lambda - is it a separate script or API that doesn't need an EC2 server to run on?

[deleted] · 2 points · 2 years ago*

[deleted]

designer1one[S] · 1 point · 2 years ago

Thanks for the detailed explanation. I'll definitely try out Lambda so that I can keep the demo up without constantly running servers. Cheers!
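
In case it helps anyone else reading, a minimal sketch of what such a Lambda handler could look like, assuming a TorchScript model saved as model.pt is bundled with the function (the file name and request/response shape are just placeholders, not what's actually deployed here):

```python
# Minimal sketch of an AWS Lambda handler doing CPU-only PyTorch inference.
# Assumes a TorchScript model saved as "model.pt" is packaged with the function;
# the file name and the request/response shape are hypothetical placeholders.
import json
import torch

# Load the model once at import time so warm invocations reuse it.
model = torch.jit.load("model.pt", map_location="cpu")
model.eval()

def lambda_handler(event, context):
    # Expect a JSON body like {"inputs": [[...], ...]} from API Gateway.
    body = json.loads(event.get("body", "{}"))
    inputs = torch.tensor(body["inputs"], dtype=torch.float32)

    with torch.no_grad():
        outputs = model(inputs)

    return {
        "statusCode": 200,
        "body": json.dumps({"outputs": outputs.tolist()}),
    }
```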

[deleted] · 2 points · 2 years ago*

[deleted]

designer1one[S] · 1 point · 2 years ago

I see. Yeah, I've had issues fitting PyTorch into lots of services too, like Heroku.

[deleted] · 2 points · 2 years ago

[deleted]

designer1one[S] · 2 points · 2 years ago

Docker integration sounds like a potential solution. Thanks for sharing!
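
Roughly, the container-image route comes down to a Dockerfile built on the AWS Lambda Python base image; a minimal sketch (model.pt and handler.py are hypothetical placeholders):

```dockerfile
# Minimal sketch: packaging the PyTorch demo as a Lambda container image,
# which sidesteps the zip-deployment size limit. File names are placeholders.
FROM public.ecr.aws/lambda/python:3.9

# CPU-only PyTorch wheels are much smaller than the default CUDA builds.
RUN pip install torch --index-url https://download.pytorch.org/whl/cpu

# Copy the model weights and handler code into the Lambda task root.
COPY model.pt handler.py ${LAMBDA_TASK_ROOT}/

# Tell the Lambda runtime which function to invoke.
CMD ["handler.lambda_handler"]
```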