The Inf1 instance family in Amazon EC2 is designed for high-performance, low-cost machine learning inference. These generally available instances are powered by AWS Inferentia chips, custom-built by AWS for ML inference, paired with 2nd Generation Intel Xeon Scalable processors. Inf1 instances offer up to 100 Gbps of networking bandwidth. They have no local instance storage and rely on Amazon EBS for all storage needs.
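Because Inf1 instances are EBS-only, a launch request must supply (or inherit from the AMI) an EBS root volume. A minimal sketch using the AWS CLI; the AMI ID, key pair name, and security group ID below are placeholders, not real values, and the volume size and type are illustrative choices:

```shell
# Launch an inf1.xlarge with an explicit EBS root volume (gp3, 100 GiB).
# ami-0123456789abcdef0, my-key, and sg-0123456789abcdef0 are placeholders.
aws ec2 run-instances \
  --instance-type inf1.xlarge \
  --image-id ami-0123456789abcdef0 \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":100,"VolumeType":"gp3","DeleteOnTermination":true}}]'
```

Setting `DeleteOnTermination` to `true` keeps the root volume from lingering (and billing) after the instance is terminated, which matters more on an EBS-only family where every byte of storage is an EBS volume.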