SRI RAO

Investor

United States

Overview

Work Experience

  • Adjunct Professor in Entrepreneurship and Innovation

    2024 - Current

    Launching a multidisciplinary agentic-AI and human-machine interaction lab studying the commercial applications, policies, and workforce impact of machine intelligence.

  • Investor, Board Member

    2024

    TieSet builds orchestration and management solutions that make it possible for AI applications to run everywhere, across diverse environments and devices, in a privacy-centric and data-efficient way.

  • Investor, Board Advisor

    2023

    Digitizing institutional investments in emerging managers.

  • Investor

    2023

    Distributed SaaS services, built by industry-leading experts in AI/ML and storage optimization, that make it simple to deploy complex storage configurations in single-, multi-, and hybrid-cloud environments.

  • Investor

    2023

    SirenOpt makes an AI-based, real-time sensing and software platform that improves the manufacturing yield and performance of thin-film products.

  • Investor

    2023

    Dream3D couples intuitive interfaces with machine intelligence to make it easy for anyone to create beautiful computer graphics.

  • Investor

    2022

    NuMind allows software engineers, data scientists, and non-experts alike to easily create state-of-the-art, LLM-powered machine learning models that process text automatically.

  • Investor

    2022

    Cache is a San Francisco-based fintech startup backed by First Round, Quiet, and some of the savviest angel investors in tech and finance. It is a brokerage designed specifically for large stock positions, offering products that reduce risk, generate passive income, and provide efficient liquidity for portfolios concentrated in individual stocks.

  • Investor

    2022

    Polymath makes it radically simple to add safety-critical navigation to industrial vehicles.

  • Investor

    2022

    The AI revolution won't happen until the cost of inference drops by 10X. While training is limited by compute, inference is limited by bandwidth. Volantis is building a new chip with 10X more bandwidth, resulting in more than 10X cheaper inference.
