Why AI Projects Get Stuck in Experimentation and How to Get Them Out
Steve Nouri of GenAI Works and Grant Spradlin of Composable discuss why AI projects get stuck in experimentation, which areas of LLM integration are most painful, and how enterprise teams are solving these challenges.
Presented by:
Steve Nouri
CEO & Founder of Generative AI Works
Steve Nouri is the CEO and co-founder of Generative AI Works, the largest AI community with a mission to empower society to discover, learn, and grow in an AI-enabled world. Steve is also the founder of AI4Diversity, a non-profit initiative that aims to bring diverse communities together to learn about and promote responsible AI.
Grant Spradlin
VP of Product at Composable
Grant Spradlin is the VP of Product at Composable. He is a dynamic leader with a passion for disruptive technology. With more than 20 years of expertise in IT architecture, application integration, and content services, Grant helps organizations create enterprise solutions that align strategy, architecture, and assets with business goals.
A recent survey conducted by Composable found that just 30% of senior tech professionals are prepared to run LLM projects for enterprise solutions in the next two years. Yet 70% plan to initiate two or more LLM projects per year, and almost 94% expect to run two or more models.
While enterprise teams are planning for multiple models and inference providers, several common challenges are holding their AI projects back. The survey revealed that 95% of respondents have identified at least one barrier keeping their projects from moving from experimentation to production.
Watch this webinar to learn about:
- The top three reasons preventing AI projects from moving forward
- Why operationalizing LLM-powered tasks has been challenging
- How enterprise architects and CTOs can prevent AI model vendor lock-in
- How to get AI projects out of experimentation and into production