
Groq

The fastest inference on the planet

AI · Series D · Est. 2016 · 200–500 employees · Mountain View

Groq builds the Language Processing Unit (LPU), a custom chip delivering inference speeds 10x faster than GPUs. Their cloud API serves open-source models like Llama and Mixtral at speeds that feel instantaneous — enabling real-time AI applications previously impossible on GPU infrastructure. Groq is the infrastructure layer for latency-sensitive AI products.
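Groq's cloud API follows the OpenAI chat-completions shape, so a request can be sketched with only the standard library. This is a minimal, hedged illustration: the endpoint path mirrors OpenAI's convention, and the model name is a placeholder for whichever open model (e.g. a Llama variant) is currently served — check Groq's docs for the exact identifiers.

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint; path and model name are illustrative,
# not verified against Groq's current documentation.
API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) one chat-completion HTTP request."""
    payload = {
        "model": "llama-3.1-8b-instant",  # placeholder open-model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-demo-key", "Say hello in one word.")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (given a real API key) would return a JSON body whose `choices[0].message.content` holds the model's reply.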

Total Raised: $640M
Investors: BlackRock · Cisco Investments · Samsung Catalyst

Founders

Jonathan Ross
Douglas Wightman