blog_posts: 48d6be52aa
Field | Value |
---|---|
id | blog-posts#34-9401 |
createdDate | 2019-08-20 21:14:25 |
title | Parallelizing across multiple CPU/GPUs to speed up deep learning inference at the edge |
link | https://aws.amazon.com/blogs/machine-learning/parallelizing-across-multiple-cpu-gpus-to-speed-up-deep-learning-inference-at-the-edge/ |
postExcerpt | AWS customers often choose to run machine learning (ML) inferences at the edge to minimize latency. In many of these situations, ML predictions must be run on a large number of inputs independently. For example, running an object detection model on each frame of a video. In these cases, parallelizing ML inferences across all available CPU/GPUs [...] |
featuredImageUrl | https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2019/08/12/parallelizing-2-300x123.gif |
hash | 48d6be52aa |
contributors | Angela Wang |
modifiedDate | 2024-02-05 20:05:57 |
displayDate | 20 Aug 2019 |
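The excerpt above describes fanning independent per-input ML predictions (such as per-frame object detection) out across all available workers. A minimal sketch of that pattern, assuming a hypothetical `run_inference` stand-in for the real model call (the actual post's implementation is not shown here):

```python
import os
from multiprocessing import Pool


def run_inference(frame):
    # Placeholder for a per-frame model prediction (e.g. object detection).
    # A real model invocation would go here; squaring is a stand-in.
    return frame * frame


def parallel_inference(frames, workers=None):
    # Each prediction is independent, so the frames can be mapped
    # across a pool of CPU workers in parallel.
    workers = workers or os.cpu_count()
    with Pool(processes=workers) as pool:
        return pool.map(run_inference, frames)


if __name__ == "__main__":
    print(parallel_inference(list(range(8)), workers=2))
```

For GPU workloads, the same map-over-independent-inputs structure applies, but each worker would typically be pinned to one device rather than sharing CPU cores.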
Links from other tables
- 5 rows from blog_post_hash in blog_post_tags