Why I Built This

I spend my days building AI systems. I've architected multi-LLM pipelines, shipped RAG implementations, and optimized models serving billions of requests. And I have a problem: I can't keep up with the research.

At Nomology, I lead AI product development. We use Claude, GPT, and Llama models to generate insights from billions of data points. When a new technique drops from Anthropic or DeepMind, I need to know if it changes how we should build.

But here's the reality: over 1,000 AI papers are published every month. Most are theoretical, incremental, or require infrastructure only Google has. The few that could actually help me ship better products? They're buried in noise.

I don't need someone to simplify papers for me. I need someone to find the right papers in the first place. Papers with code. Papers with clear implementation paths. Papers that solve problems I actually have.

So I built that filter.

My Background

I'm Yuriy Matso, Head of Product for Artificial Intelligence at Nomology, where I've been building AI-powered systems since 2019.

What I Build

  • Multi-LLM architectures using Claude, GPT, and Llama
  • RAG systems and vector database implementations
  • ML pipelines on Google Cloud Vertex AI and BigQuery
  • Agentic workflows for automated decision-making

Scale I Work At

  • Billions of data points analyzed
  • $50M+ in AI-optimized ad spend
  • Production systems serving real users daily
  • Rapid POCs to validate feasibility before building

Before Nomology, I led product at Channel Factory (managing 20 engineers and ML specialists), worked as a Project Manager at Sigma Software, and started my career as a Strategy Consultant at KPMG. I have a Master's in International Management from UC San Diego and a Venture Capital certificate from UC Berkeley.

I also build side projects to stay hands-on with new technologies. Futalgo is my algorithmic trading platform, where I used Claude and GPT to build trading systems across multiple languages and stacks.

How I Filter Papers

Every paper goes through the same questions I ask when deciding whether to implement something at Nomology:

1. Is there code or a clear path?

Does the paper include a GitHub repo? If not, is the architecture detailed enough to implement with Python and standard tools? No path to code = no coverage.

2. Is the improvement meaningful?

Papers love to report "state-of-the-art" results. I dig into the benchmarks. Is the improvement 2% or 20%? Does it enable something new, or is it benchmark gaming?

3. Does it run on real infrastructure?

Does it require TPU pods, proprietary datasets, or custom hardware? If it only works at Google scale, it doesn't help my readers.

4. Would I implement this tomorrow?

The ultimate filter: if I had this problem at Nomology, would I actually use this paper to solve it? If the answer is "interesting but not actionable," I skip it.
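The four questions above amount to a simple decision procedure. As an illustrative sketch only (the field names, threshold, and scoring are my own framing for this page, not a real pipeline):

```python
def worth_covering(paper: dict) -> bool:
    """Sketch of the four-question filter. Field names are hypothetical."""
    # 1. Is there code or a clear path? No path to code = no coverage.
    if not (paper.get("github_repo") or paper.get("detailed_architecture")):
        return False
    # 2. Is the improvement meaningful? (e.g., at least 10% over baseline)
    if paper.get("improvement_pct", 0) < 10:
        return False
    # 3. Does it run on real infrastructure, not just Google-scale hardware?
    if paper.get("requires_tpu_pods") or paper.get("proprietary_data"):
        return False
    # 4. Would I actually implement this tomorrow?
    return paper.get("actionable", False)

# A paper with code, a 15% gain, commodity hardware, and a use case I have:
paper = {"github_repo": True, "improvement_pct": 15, "actionable": True}
print(worth_covering(paper))  # True
```

In practice these calls are judgment, not booleans, but the order matters: code first, then impact, then feasibility, then the "would I ship it" test.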

What Makes Tekta Different

Most AI newsletters summarize papers. Anyone can do that with ChatGPT. Tekta solves the harder problem: which papers are worth reading in the first place.

I scan thousands of papers to find the ~5% that have real implementation value. Papers with code. Papers with clear architectures. Papers that solve problems practitioners actually have.

Then I go deep. Every article includes an Implementation Blueprint: the tech stack, the parameters that matter, the production gotchas. The stuff that's never in the abstract.

What you get:

  • Rigorous curation: Only papers you can actually ship
  • Actionability filter: Code or clear implementation path required
  • Practitioner perspective: Written by someone who implements, not just reads
  • Implementation Blueprint: Tech stack, code, parameters, pitfalls
  • Honest limitations: What the paper doesn't solve

See the Filter in Action

Curated research from Google DeepMind, OpenAI, Anthropic, and Meta AI. Only the papers you can actually ship.

Tekta: from Greek tekton, meaning builder.