Jendrik Poloczek

Investor at Greenfield Capital

Berlin, Germany

Work Experience

  • Principal Research Engineer & Investor

    2022 - Current

    ► Leading infrastructure investments.
    ► Greenfield Capital is a crypto fund that makes long-term bets on early developer teams building towards an open, decentralized, and more robust architecture of tomorrow's web. Portfolio: NYM, Sovryn, Vega, Celo, NEAR, DapperLabs, 1inch, Arweave, Multis, Stakewise, IDLE.finance, among others.

  • Head of Engineering

    2020 - 2022

    ► Due diligence, as part of the investment team, on companies, protocol architecture, and smart contracts.
    ► Consulting on security and professional asset custody (about $500M AUM at the time).
    ► Launched two blockchain networks, Vega ($5M round) and Nym ($6.5M round), in a community effort, and maintained validators for both.
    ► Supported portfolio companies with running off-chain infrastructure such as oracles and keepers.

  • Senior Software Engineer

    2020 - 2020

    ► Coinbase is a digital currency exchange headquartered in San Francisco, California. It operates exchanges of Bitcoin, Ethereum, Bitcoin Cash, and Litecoin, as well as other digital assets, against fiat currencies in 32 countries. Coinbase has served over 10 million customers and facilitated the exchange of more than $50 billion worth of digital currency.
    ► Designed the data model and sub-pipeline to efficiently parse multiple semantic layers of Ethereum on-chain data for the analytics pipeline. Built PoCs in Go to help data teams retrieve cutting-edge data for reports.
    ► Advised the data platform team on integrating Kafka and Kafka Streams as Coinbase's new central message hub, e.g. for event-sourced streaming analytics (see the sketch below).
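
For illustration only, a minimal Kafka Streams topology in the Scala DSL of the kind such a central message hub enables. Topic names, keys, and the aggregation are hypothetical assumptions, not Coinbase's actual setup; it assumes a recent kafka-streams-scala artifact on the classpath:

```scala
import java.util.Properties

import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

object TransferCountApp extends App {
  val builder = new StreamsBuilder()

  // Consume transfer events keyed by address and keep a running count per address.
  builder
    .stream[String, String]("eth-transfer-events")   // hypothetical input topic
    .groupByKey
    .count()
    .toStream
    .to("eth-transfer-counts-by-address")            // hypothetical output topic

  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "onchain-analytics-poc")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.ShutdownHookThread(streams.close())
}
```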

  • Head of Engineering

    2019 - 2020

    ► Served more than 25 hedge funds, from the US to Korea, with refined on-chain data across several blockchains: exchange flows from whale investors and miners to exchanges and back, network metrics such as miner hashrate and block rewards, and aggregated metrics such as SOPR (Spent Output Profit Ratio, sketched below) or UTXO age bands, serving more than 900 different API endpoints in total.
    ► Architected low-latency, generic blockchain data pipelines for 5 different blockchains, capable of batch and stream processing, fault-tolerant, highly available, and composable, built on AWS EC2, AWS S3, AWS Athena (later Presto), Apache Kafka, Kafka Connect, Apache Kafka Streams, and RocksDB state stores. Some parts are now open source.
    ► Led and supported an agile team of 5 engineers in building fullstack MVPs in weekly cycles to test product-market fit. We shipped more than 20 data products, including interaction and visualization. I also took care of product development, strategy, and customer analytics.
    ► Supported our team of 4 researchers with a self-service data lake, an Athena query interface, and a flexible data model setup, so they could release new data products to our customers independently, without customized data pipelines. Here we used dynamic API routes based on our internal data catalog.
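
As a rough sketch of how one of the aggregated metrics above is commonly defined: SOPR compares the fiat value of coins at the moment their outputs are spent with their fiat value when those outputs were created. The data model below is a simplified, hypothetical illustration; the production pipelines computed this over Kafka Streams and Athena, not in-memory Scala collections:

```scala
object Sopr {
  // Hypothetical, simplified representation of a spent transaction output.
  final case class SpentOutput(
    valueBtc: Double,        // coin amount carried by the output
    priceAtCreation: Double, // fiat price when the output was created
    priceAtSpend: Double     // fiat price when the output was spent
  )

  /** SOPR over a batch of spent outputs (e.g. one block or one day):
    * fiat value realised at spend time divided by fiat value at creation time.
    * A ratio above 1.0 means coins moved on-chain were, in aggregate, spent at a profit.
    */
  def sopr(spent: Seq[SpentOutput]): Double = {
    val realised = spent.map(o => o.valueBtc * o.priceAtSpend).sum
    val created  = spent.map(o => o.valueBtc * o.priceAtCreation).sum
    if (created == 0.0) Double.NaN else realised / created
  }

  def main(args: Array[String]): Unit = {
    val batch = Seq(
      SpentOutput(valueBtc = 1.5, priceAtCreation = 8000.0, priceAtSpend = 9500.0),
      SpentOutput(valueBtc = 0.2, priceAtCreation = 12000.0, priceAtSpend = 9500.0)
    )
    println(f"SOPR = ${sopr(batch)}%.3f") // > 1.0: net profit-taking in this batch
  }
}
```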

  • Tech Lead

    2018 - 2019

    ► Architected low-latency, generic blockchain data pipelines for 5 different blockchains, capable of batch and stream processing, fault-tolerant, highly available, and composable, built on AWS EC2, AWS S3, AWS Athena (later Presto), Apache Kafka, Kafka Connect, Apache Kafka Streams, and RocksDB state stores. Some parts are now open source.
    ► Led and supported an agile team of 5 engineers in building fullstack MVPs in weekly cycles to test product-market fit. We shipped more than 20 data products, including interaction and visualization. I also took care of product development, strategy, and customer analytics.
    ► Supported our research team with a self-service data lake, an Athena query interface, and a flexible data model setup, so they could release new data products to our customers independently, without customized data pipelines. Here we used dynamic API routes based on our internal data catalog.

  • Co-Founder, CTO

    2018 - 2019

    ► Merged with TokenAnalyst, a seed-funded startup in the blockchain analytics space providing on-chain exchange flows and network fundamentals, primarily for crypto hedge funds and family offices.
    ► Acquired requirements from customers to solve their hair-on-fire problem of smart contract monitoring and operations at the post-ICO stage.
    ► Built a fullstack MVP, including full-node infrastructure, data extraction, and an interface with NodeJS and ReactJS, for three industry customers to monitor their smart contracts on the Ethereum blockchain.

  • Entrepreneur In Residence

    2018 - 2018

    ► World-wide deep-tech incubator Entrepreneur First, backed by Greylock, Lakestar, Mosaic, Founders Fund, and the founders of Google DeepMind.
    ► Cohort of 60 potential co-founder candidates from various research- and deep-tech-heavy backgrounds, working on non-consensus ideas. EF has helped build over 140 technology companies, collectively worth over $1bn.
    ► Developed and validated ideas over several weeks, pitched them, and worked with future customers to continuously iterate on the product.
    ► Pitched the product to, and worked with, various VCs including Mosaic Ventures, Point Nine Capital, 1kx, and ConsenSys Ventures.

  • Backend and Data Engineer

    2016 - 2017

    ► Peer-to-peer content distribution of live video conferences using a patented peer-to-peer algorithm, serving more than 100,000 viewers simultaneously, for customers like Microsoft, Facebook, and Honeywell.
    ► Co-architected a batch processing pipeline to generate reports from telemetry data of conference events with more than 100,000 viewers, based on Apache Kafka, Apache Spark, and Azure Table storage.
    ► Co-architected a resilient WebSocket and HTTP API using Twitter's microservice framework Finagle on Microsoft Azure (see the sketch below).
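
A minimal Finagle HTTP endpoint in Scala, to illustrate the service-as-a-function model mentioned above. The endpoint, port, and payload are hypothetical, and the actual API also handled WebSockets, which this sketch omits; it assumes the finagle-http artifact:

```scala
import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response, Status}
import com.twitter.util.{Await, Future}

object ReportApi extends App {
  // Every request is answered with a Future[Response]; Finagle composes
  // timeouts, retries, and load balancing around such services, which is
  // where the resilience comes from.
  val service = new Service[Request, Response] {
    def apply(req: Request): Future[Response] = {
      val rep = Response()
      rep.status = Status.Ok
      rep.contentString = """{"status":"ok"}"""
      Future.value(rep)
    }
  }

  val server = Http.serve(":8080", service)
  Await.ready(server)
}
```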

  • Data Engineer

    2014 - 2015

    ► Co-led the transition from a PHP-based monolith to Scala microservices. We adopted Scala in 2014 as our new tech stack, together with Apache Spark and Apache Kafka.
    ► Initiated the transition away from RabbitMQ to Apache Kafka as our central data pipeline hub.
    ► Implemented an on-site news recommender system based on Apache Spark and collaborative filtering, among other text mining and supervised learning algorithms (see the sketch below).
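
A compact sketch of a collaborative-filtering recommender of the kind mentioned in the last item, using Spark MLlib's ALS. The schema, the feedback signal (click counts treated as implicit feedback), and the parameters are illustrative assumptions, not the production system:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.recommendation.ALS

object NewsRecommender extends App {
  val spark = SparkSession.builder()
    .appName("news-recommender-sketch")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  // Hypothetical implicit-feedback data: (userId, articleId, clicks).
  val clicks = Seq(
    (1, 10, 3.0), (1, 11, 1.0),
    (2, 10, 5.0), (2, 12, 2.0),
    (3, 11, 4.0), (3, 12, 1.0)
  ).toDF("userId", "articleId", "clicks")

  // Alternating Least Squares factorises the user/article interaction matrix;
  // implicitPrefs treats click counts as confidence rather than explicit ratings.
  val model = new ALS()
    .setUserCol("userId")
    .setItemCol("articleId")
    .setRatingCol("clicks")
    .setImplicitPrefs(true)
    .setRank(10)
    .fit(clicks)

  // Top 3 article recommendations per user.
  model.recommendForAllUsers(3).show(truncate = false)

  spark.stop()
}
```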

  • PhD Researcher

    2013 - 2014

    After graduating with a master's degree in Computer Science from the Carl-von-Ossietzky University in Oldenburg, Germany in 2013, I worked as a researcher in the Computational Intelligence group. Besides data mining and machine learning in the field of wind energy analysis and prediction, I gained experience in algorithm design and empirical experiments. Furthermore, I worked on WindML, an open source framework in Python for wind energy forecasting with machine learning approaches, and on MetaOpt, a library that optimizes black-box functions within a limited amount of time, utilizing multiple processors.
    ► Jendrik Poloczek, Oliver Kramer: Multi-Stage Constraint Surrogate Models for Evolution Strategies. KI 2014. Springer.
    ► Jendrik Poloczek, Nils André Treiber, Oliver Kramer: KNN Regression as Geo-Imputation Method for Spatio-Temporal Wind Data. SOCO 2014. Springer.
    ► Jendrik Poloczek, Oliver Kramer: Local SVM Constraint Surrogate Models for Self-adaptive Evolution Strategies. KI 2013: 164-175. Springer.
    ► Oliver Kramer, Fabian Gieseke, Justin Heinermann, Jendrik Poloczek, Nils André Treiber: A Framework for Data Mining in Wind Power Time Series. DARE 2014.
