Cloudflare Powers Hyper-local AI Inference With NVIDIA Accelerated Computing

SAN FRANCISCO, Sept 29 (Bernama-BUSINESS WIRE) — Cloudflare, Inc. (NYSE: NET), the leading connectivity cloud company, today announced that its global network will deploy NVIDIA GPUs at the edge, combined with NVIDIA Ethernet switches, putting AI inference compute power close to users around the globe. The deployment will also feature NVIDIA's full-stack inference software, including NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server, to further accelerate the performance of AI applications, including large language models.

Starting today, all Cloudflare customers can access local compute power to deliver AI applications and services on fast, more compliant infrastructure. With this announcement, organizations will be able, for the first time through Cloudflare, to run AI workloads at scale and pay for compute power only as needed.
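The release does not specify the developer interface for this compute. As a purely illustrative sketch, the following assumes access through a Workers AI-style binding in a Cloudflare Worker; the binding name (`AI`), its method shape, and the model identifier are assumptions for illustration, not details from the announcement.

```typescript
// Hedged sketch: running LLM inference on a GPU in the nearest
// Cloudflare data center from inside a Worker. The "AI" binding
// interface and model name below are illustrative assumptions.
export interface Env {
  AI: { run(model: string, input: Record<string, unknown>): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Inference runs close to the user at the edge; under the
    // pay-as-needed model described in the release, the customer
    // would be billed only for the compute consumed.
    const result = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
      prompt: "Explain why edge inference reduces latency.",
    });
    return Response.json(result);
  },
};
```

In this sketch, proximity to the user is what the GPU-at-the-edge deployment provides: the request never leaves the nearest data center, which is the latency advantage the announcement describes.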

http://mrem.bernama.com/viewsm.php?idm=47130
