Khosla Ventures Leads $17.5 Million Investment in Fastino, Which Trains AI Models on Budget-Friendly Gaming GPUs
Fastino trains AI models on cheap gaming GPUs while raising $17.5M.
Tech giants like to boast about trillion-parameter AI models that require massive and expensive GPU clusters, but Fastino is taking a different approach.
The Palo Alto-based startup says it has invented a new kind of AI model architecture that’s intentionally small and task-specific. The models are so small they’re trained with low-end gaming GPUs worth less than $100,000 in total, Fastino says.
Spiking Interest
The method is attracting attention and has seen Fastino secure $17.5 million in seed funding led by Khosla Ventures, famously OpenAI’s first venture investor. This brings the startup’s total funding to nearly $25 million. It raised $7 million last November in a pre-seed round led by Microsoft’s VC arm M12 and Insight Partners.
“Our models are faster, more accurate, and cost a fraction to train while outperforming flagship models on specific tasks,” says Ash Lewis, Fastino’s CEO and co-founder.
Fastino has built a suite of small models that it sells to enterprise customers. Each model focuses on a specific task a company might need, like redacting sensitive data or summarizing corporate documents.
Small but Advanced
Fastino isn’t disclosing metrics or customer names yet, but says its performance is wowing early users. For example, because they’re so small, its models can deliver an entire response in a single token.
It’s still a bit early to tell whether Fastino’s approach will catch on. The enterprise AI space is crowded, with companies like Cohere and Databricks also touting AI that excels at certain tasks. Enterprise-focused SOTA model makers, including Anthropic and Mistral, offer small models, too. And it’s no secret that the future of generative AI for enterprise likely lies in smaller, more focused language models.
Time may tell, but an early vote of confidence from Khosla certainly doesn’t hurt. For now, Fastino says it’s focused on building a cutting-edge AI team. It’s targeting researchers at top AI labs who aren’t obsessed with building the biggest model or beating the benchmarks.

