aVenture is in Alpha: aVenture recently launched early public access to our research product. It's intended to illustrate capabilities and gather feedback from users. While in Alpha, you should expect the research data to be limited and may not yet meet our exacting standards. We've made the decision to temporarily present this information to showcase the product's potential, but you should not yet rely upon it for your investment decisions.
From TechCrunch
By Paul Sawers
April 24, 2024
A French startup has raised a hefty seed investment to “rearchitect compute infrastructure” for developers wanting to build and train AI applications more efficiently.
FlexAI, as the company is called, has been operating in stealth since October 2023, but the Paris-based company is formally launching Wednesday with €28.5 million ($30 million) in funding, while teasing its first product: an on-demand cloud service for AI training.
This is a chunky bit of change for a seed round, which normally means substantial founder pedigree — and that is the case here. FlexAI co-founder and CEO Brijesh Tripathi was previously a senior design engineer at GPU giant and now AI darling Nvidia, before landing in various senior engineering and architecting roles at Apple; Tesla (working directly under Elon Musk); Zoox (before Amazon acquired the autonomous driving startup); and, most recently, Tripathi was VP of Intel’s AI and supercompute platform offshoot, AXG.
FlexAI co-founder and CTO Dali Kilani has an impressive CV, too, serving in various technical roles at companies, including Nvidia and Zynga, while most recently filling the CTO role at French startup Lifen, which develops digital infrastructure for the healthcare industry.
The seed round was led by Alpha Intelligence Capital (AIC), Elaia Partners and Heartcore Capital, with participation from Frst Capital, Motier Ventures, Partech and InstaDeep CEO Karim Beguir.
To grasp what Tripathi and Kilani are attempting with FlexAI, it’s first worth understanding what developers and AI practitioners are up against in terms of accessing “compute”; this refers to the processing power, infrastructure and resources needed to carry out computational tasks such as processing data, running algorithms, and executing machine learning models.
“Using any infrastructure in the AI space is complex; it’s not for the faint of heart, and it’s not for the inexperienced,” Tripathi told TechCrunch. “It requires you to know too much about how to build infrastructure before you can use it.”
By contrast, the public cloud ecosystem that has evolved these past couple of decades serves as a fine example of how an industry has emerged from developers’ need to build applications without worrying too much about the back end.
“If you are a small developer and want to write an application, you don’t need to know where it’s being run, or what the back end is — you just need to spin up an EC2 [Amazon Elastic Compute cloud] instance and you’re done,” Tripathi said. “You can’t do that with AI compute today.”
In the AI sphere, developers must figure out how many GPUs (graphics processing units) they need to interconnect over what type of network, managed through a software ecosystem that they are entirely responsible for setting up. If a GPU or network fails, or if anything in that chain goes awry, the onus is on the developer to sort it.
“We want to bring AI compute infrastructure to the same level of simplicity that the general purpose cloud has gotten to — after 20 years, yes, but there is no reason why AI compute can’t see the same benefits,” Tripathi said. “We want to get to a point where running AI workloads doesn’t require you to become data center experts.”
With the current iteration of its product going through its paces with a handful of beta customers, FlexAI will launch its first commercial product later this year. It’s basically a cloud service that connects developers to “virtual heterogeneous compute,” meaning that they can run their workloads and deploy AI models across multiple architectures, paying on a usage basis rather than renting GPUs on a dollars-per-hour basis.
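The billing distinction matters more than it might seem. A minimal sketch of the two models, using purely illustrative rates (FlexAI has not published pricing):

```python
def hourly_rental_cost(rate_per_hour: float, hours_reserved: float) -> float:
    """Traditional model: pay for every reserved hour, idle or not."""
    return rate_per_hour * hours_reserved

def usage_based_cost(rate_per_gpu_second: float, gpu_seconds_used: float) -> float:
    """Usage model: pay only for compute actually consumed."""
    return rate_per_gpu_second * gpu_seconds_used

# A job that actively uses 2 GPU-hours inside a 24-hour reservation window
# (the $2.50/hour rate is hypothetical, not any provider's real price):
reserved = hourly_rental_cost(rate_per_hour=2.50, hours_reserved=24)  # pays for 24h
metered = usage_based_cost(rate_per_gpu_second=2.50 / 3600,
                           gpu_seconds_used=2 * 3600)                 # pays for 2h
print(f"reserved: ${reserved:.2f}, metered: ${metered:.2f}")
```

For bursty training and fine-tuning jobs, the gap between reserved and metered cost is exactly the idle time the developer would otherwise be paying for.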
GPUs are vital cogs in AI development, serving to train and run large language models (LLMs), for example. Nvidia is one of the preeminent players in the GPU space, and one of the main beneficiaries of the AI revolution sparked by OpenAI and ChatGPT. In the 12 months since OpenAI launched an API for ChatGPT in March 2023, allowing developers to bake ChatGPT functionality into their own apps, Nvidia’s market capitalization ballooned from around $500 billion to more than $2 trillion.
LLMs are now pouring out of the technology industry, with demand for GPUs skyrocketing in tandem. But GPUs are expensive to run, and renting them for smaller jobs or ad hoc use cases doesn’t always make sense and can be prohibitively expensive; this is why AWS has been dabbling with time-limited rentals for smaller AI projects. But renting is still renting, which is why FlexAI wants to abstract away the underlying complexities and let customers access AI compute on an as-needed basis.
FlexAI’s starting point is that most developers don’t really care whose GPUs or chips they use, whether it’s Nvidia, AMD, Intel, Graphcore or Cerebras. Their main concern is being able to develop their AI and build applications within their budgetary constraints.
This is where FlexAI’s concept of “universal AI compute” comes in: FlexAI takes the user’s requirements and allocates them to whatever architecture makes sense for that particular job, taking care of all the necessary conversions across the different platforms, whether that’s Intel’s Gaudi infrastructure, AMD’s ROCm or Nvidia’s CUDA.
“What this means is that the developer is only focused on building, training and using models,” Tripathi said. “We take care of everything underneath. The failures, recovery, reliability, are all managed by us, and you pay for what you use.”
In many ways, FlexAI is setting out to fast-track for AI what has already been happening in the cloud, which means more than replicating the pay-per-usage model: It means the ability to go “multicloud” by leaning on the different benefits of different GPU and chip infrastructures.
FlexAI will channel a customer’s specific workload depending on what their priorities are. If a company has limited budget for training and fine-tuning their AI models, they can set that within the FlexAI platform to get the maximum amount of compute bang for their buck. This might mean going through Intel for cheaper (but slower) compute, but if a developer has a small run that requires the fastest possible output, then it can be channeled through Nvidia instead.
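The routing logic described above can be sketched as a simple priority-based selector. Everything here is hypothetical: the backend names, prices and speed factors are invented for illustration, since FlexAI has not published its scheduling internals.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    cost_per_hour: float   # illustrative price, not real pricing
    relative_speed: float  # higher = faster; illustrative

# Hypothetical catalog of heterogeneous compute backends.
BACKENDS = [
    Backend("intel-gaudi", cost_per_hour=1.20, relative_speed=0.70),
    Backend("amd-rocm",    cost_per_hour=1.60, relative_speed=0.85),
    Backend("nvidia-cuda", cost_per_hour=2.80, relative_speed=1.00),
]

def pick_backend(priority: str) -> Backend:
    """Route a workload by the customer's stated priority."""
    if priority == "cost":
        # Cheapest compute, accepting slower turnaround.
        return min(BACKENDS, key=lambda b: b.cost_per_hour)
    if priority == "speed":
        # Fastest possible output, regardless of price.
        return max(BACKENDS, key=lambda b: b.relative_speed)
    # Balanced default: cheapest cost per unit of effective throughput.
    return min(BACKENDS, key=lambda b: b.cost_per_hour / b.relative_speed)
```

Under this toy model, a budget-constrained fine-tuning run lands on the cheaper (but slower) backend, while a latency-sensitive job is channeled to the fastest one, which mirrors the Intel-versus-Nvidia trade-off Tripathi describes.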
Under the hood, FlexAI is basically an “aggregator of demand”: it rents the hardware itself through traditional means and, using its “strong connections” with Intel and AMD, secures preferential prices that it spreads across its own customer base. This doesn’t necessarily mean side-stepping the kingpin Nvidia, but with Intel and AMD fighting for the GPU scraps left in Nvidia’s wake, they have a huge incentive to play ball with aggregators such as FlexAI.
“If I can make it work for customers and bring tens to hundreds of customers onto their infrastructure, they [Intel and AMD] will be very happy,” Tripathi said.
This sits in contrast to similar GPU cloud players in the space such as the well-funded CoreWeave and Lambda Labs, which are focused squarely on Nvidia hardware.
“I want to get AI compute to the point where the current general purpose cloud computing is,” Tripathi noted. “You can’t do multicloud on AI. You have to select specific hardware, number of GPUs, infrastructure, connectivity, and then maintain it yourself. Today, that’s the only way to actually get AI compute.”
When asked who the exact launch partners are, Tripathi said that he was unable to name all of them due to a lack of “formal commitments” from some of them.
“Intel is a strong partner, they are definitely providing infrastructure, and AMD is a partner that’s providing infrastructure,” he said. “But there is a second layer of partnerships that are happening with Nvidia and a couple of other silicon companies that we are not yet ready to share, but they are all in the mix and MOUs [memorandums of understanding] are being signed right now.”
Tripathi is more than equipped to deal with the challenges ahead, having worked in some of the world’s largest tech companies.
“I know enough about GPUs; I used to build GPUs,” Tripathi said of his seven-year stint at Nvidia, ending in 2007 when he jumped ship for Apple as it was launching the first iPhone. “At Apple, I became focused on solving real customer problems. I was there when Apple started building their first SoCs [system on chips] for phones.”
Tripathi also spent two years at Tesla from 2016 to 2018 as hardware engineering lead, where he ended up working directly under Elon Musk for his last six months after two people above him abruptly left the company.
“At Tesla, the thing that I learned and I’m taking into my startup is that there are no constraints other than science and physics,” he said. “How things are done today is not how it should be or needs to be done. You should go after what the right thing to do is from first principles, and to do that, remove every black box.”
Tripathi was involved in Tesla’s transition to making its own chips, a move that has since been emulated by GM and Hyundai, among other automakers.
“One of the first things I did at Tesla was to figure out how many microcontrollers there are in a car, and to do that, we literally had to sort through a bunch of those big black boxes with metal shielding and casing around it, to find these really tiny small microcontrollers in there,” Tripathi said. “And we ended up putting that on a table, laid it out and said, ‘Elon, there are 50 microcontrollers in a car. And we pay sometimes 1,000 times margins on them because they are shielded and protected in a big metal casing.’ And he’s like, ‘let’s go make our own.’ And we did that.”
Looking further into the future, FlexAI has aspirations to build out its own infrastructure, too, including data centers. This, Tripathi said, will be funded by debt financing, building on a recent trend that has seen rivals in the space, including CoreWeave and Lambda Labs, use Nvidia chips as collateral to secure loans — rather than giving more equity away.
“Bankers now know how to use GPUs as collaterals,” Tripathi said. “Why give away equity? Until we become a real compute provider, our company’s value is not enough to get us the hundreds of millions of dollars needed to invest in building data centers. If we did only equity, we disappear when the money is gone. But if we actually bank it on GPUs as collateral, they can take the GPUs away and put it in some other data center.”