Granica is pioneering data optimization for large-scale AI and analytics. Our unified platform enables enterprises to dramatically reduce storage and compute costs while accelerating data pipelines, unlocking faster, more efficient access to intelligence at petabyte scale.
At the heart of Granica's mission is the belief that data is the foundation of modern AI. Realizing its full potential requires rethinking how data is managed, moved, and made useful. That's why we bring together world-class talent in AI research, distributed systems, and business to tackle the hardest challenges in data at scale.
Backed by leading investors including NEA and Bain Capital Ventures, Granica is helping the world's most forward-thinking organizations build a smarter, more cost-efficient data layer for the AI era.
Smarter Infrastructure for the AI Era
We make data efficient, safe, and ready for scale—think smarter, more foundational infrastructure for the AI era. Our technology integrates directly with modern data stacks like Snowflake, Databricks, and S3-based data lakes, enabling:
60%+ reduction in storage costs and up to 60% lower compute spend
3x faster data processing
20% platform efficiency gains
Trusted by Industry Leaders
Enterprise leaders globally already rely on Granica to cut costs, boost performance, and unlock more value from their existing data platforms.
A Deep Tech Approach to AI
We’re unlocking the layers beneath platforms like Snowflake and Databricks, making them faster, cheaper, and more AI-native. We combine advanced research with practical productization, powered by a dual-track strategy:
Research: Led by Chief Scientist Andrea Montanari (Stanford Professor), we publish 1–2 top-tier papers per quarter.
Product: Actively processing 100+ PB today and targeting exabyte scale by Q4 2025.
Backed by the Best
We’ve raised $65M+ from NEA, Bain Capital Ventures, A* Capital, and operators behind Okta, Eventbrite, Tesla, and Databricks.
Our Mission
To convert entropy into intelligence, so every builder—human or AI—can make the impossible real.
We’re building the default data substrate for AI, and a generational company designed to endure beyond any single product cycle.
WHAT YOU’LL DO
This is a deep systems role for someone who lives and breathes lakehouse internals, knows open source cold, and wants to push the limits of what’s possible with Delta, Iceberg, and Parquet at petabyte scale.
Build and scale the transactional core of our data platform. Design, implement, and optimize ACID-compliant data layers based on Delta Lake and Apache Iceberg, enabling reliable time-travel queries and seamless schema evolution on petabyte-scale datasets (see the first sketch after this list).
Accelerate metadata-driven performance. Develop high-performance services for compaction, caching, and metadata pruning that enable sub-50ms query planning, even across millions of file pointers (second sketch below).
Optimize data layout for maximum efficiency. Work deeply with Parquet and similar formats to improve column ordering, encoding strategies (e.g., dictionary and bit-packing), and indexing methods like bloom filters and zone maps, reducing I/O and boosting scan performance by up to 10x (third sketch below).
Develop intelligent, adaptive indexing systems. Collaborate with research to prototype indexing and partitioning strategies that automatically learn from access patterns and evolve over time, eliminating the need for manual table analysis (final sketch below).
Build robust, self-healing pipelines. Design and maintain data infrastructure that scales automatically across cloud storage platforms (S3, GCS, ADLS), with built-in observability, fault tolerance, and hands-off reliability.
Write maintainable, long-lasting systems code. Produce clean, test-driven code in Java, Scala, or Go, with clear documentation and architectural design that enables long-term extensibility.
Optimize for human impact. Your work will directly power faster, more reliable analytics and model training. When dashboards refresh instantly and insights surface in seconds—you’ll know your work made it happen.
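To ground the bullets above, here are a few sketches of the kinds of operations this role touches. First, time travel and schema evolution: a minimal Delta Lake example in Scala, assuming a local Spark 3.x session with the delta-spark artifact on the classpath. The table path and column names are illustrative, not part of our stack.

    import org.apache.spark.sql.SparkSession

    object TimeTravelSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("delta-time-travel-sketch")
          .master("local[*]")
          // Register Delta's SQL extension and catalog (requires delta-spark on the classpath)
          .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
          .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
          .getOrCreate()
        import spark.implicits._

        val path = "/tmp/events_delta" // illustrative location

        // Version 0: initial snapshot
        Seq((1L, "click"), (2L, "view")).toDF("id", "event")
          .write.format("delta").save(path)

        // Version 1: schema evolution -- append rows carrying a new column,
        // letting Delta merge the schemas transactionally
        Seq((3L, "click", "us-east-1")).toDF("id", "event", "region")
          .write.format("delta").option("mergeSchema", "true").mode("append").save(path)

        // Time travel: read the table as of version 0, before `region` existed
        spark.read.format("delta").option("versionAsOf", 0).load(path).show()

        spark.stop()
      }
    }

Iceberg exposes the equivalent read path through snapshot options (e.g., its snapshot-id read option), so the transactional core has to generalize across both formats.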
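Second, compaction and metadata-driven pruning, continuing with the same session and table. OPTIMIZE with Z-ordering assumes open-source Delta 2.0 or later; the point is that after compaction, the filter below is answered from per-file statistics rather than by scanning data files.

    // Continues with the `spark` session and table from the previous sketch.
    val path = "/tmp/events_delta"

    // Compaction: bin-pack small files into larger ones, co-locating rows by `id`
    // so per-file min/max statistics become selective (OSS Delta 2.0+)
    spark.sql(s"OPTIMIZE delta.`$path` ZORDER BY (id)")

    // Pruning happens at planning time: this filter is answered from per-file
    // statistics in the transaction log, so most files are skipped before any
    // Parquet footer is opened
    spark.read.format("delta").load(path).where("id = 42").explain()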
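Third, Parquet layout tuning: sorting within partitions so dictionary and run-length encodings bite, plus a per-column bloom filter. The parquet.bloom.filter.* options assume Parquet 1.12+ (Spark 3.2+) and a Spark version that forwards writer options to the underlying Parquet writer; column names and paths are again illustrative.

    // Continues with the `spark` session from the first sketch.
    val events = spark.read.format("delta").load("/tmp/events_delta")

    events
      .sortWithinPartitions("event") // clustering makes dictionary/RLE encodings effective
      .write
      .option("parquet.enable.dictionary", "true")       // dictionary-encode repetitive columns
      .option("parquet.bloom.filter.enabled#id", "true") // per-column bloom filter for point lookups
      .option("parquet.bloom.filter.expected.ndv#id", "1000000")
      .mode("overwrite")
      .parquet("/tmp/events_parquet") // illustrative output path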
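Finally, the adaptive-indexing track, in deliberately toy form. Everything here is hypothetical (the stats type, the scoring rule); it stands in for the real problem of learning clustering columns from observed predicates instead of manual table analysis.

    // Hypothetical sketch: choose clustering / Z-order columns from workload telemetry.
    final case class WorkloadStats(filterCounts: Map[String, Long])

    object AdaptiveLayout {
      // Rank columns by how often queries filter on them; the top k become
      // candidates for the next OPTIMIZE ... ZORDER BY pass.
      def suggestClusteringColumns(stats: WorkloadStats, k: Int = 2): Seq[String] =
        stats.filterCounts.toSeq.sortBy { case (_, n) => -n }.take(k).map(_._1)

      def main(args: Array[String]): Unit = {
        val observed = WorkloadStats(Map("id" -> 9400L, "region" -> 3100L, "event" -> 250L))
        println(suggestClusteringColumns(observed)) // List(id, region)
      }
    }

A production version would close the loop against the table format's own metadata and query logs rather than a static map, but the shape of the problem is the same.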
WHAT WE’RE LOOKING FOR
4+ years building distributed data systems or working on the internals of lakehouse architectures
Deep expertise with Delta Lake, Iceberg, Hive Metastore, or similar open-source table formats and catalogs
Strong programming skills in Scala, Java, or Go with production-grade quality and testing
Experience with Spark, Parquet optimization, file compaction, and advanced query planning
Ability to optimize for both performance and reliability across cloud storage layers (S3, GCS, ADLS)
Excited by the opportunity to shape the next layer of AI-ready infrastructure from the ground up
WHY JOIN GRANICA
If you’ve helped build the modern data stack at a large company (Databricks, Snowflake, Confluent, or similar), you already know how critical lakehouse infrastructure is to AI and analytics at scale. At Granica, you’ll take that knowledge and apply it where it matters most: at the most fundamental layer in the data ecosystem.
Own the product, not just the feature. At Granica, you won’t be optimizing edge cases or maintaining legacy systems. You’ll architect and build foundational components that define how enterprises manage and optimize data for AI.
Move faster, go deeper. No multi-month review cycles or layers of abstraction—just high-agency engineering work where great ideas ship weekly. You’ll work directly with the founding team, engage closely with design partners, and see your impact hit production fast.
Work on hard, meaningful problems. From transaction layer design in Delta and Iceberg, to petabyte-scale compaction and schema evolution, to adaptive indexing and cost-aware query planning—this is deep systems engineering at scale.
Join a team of expert builders. Our engineers have designed the core internals of cloud-scale data systems, and we maintain a culture of peer-driven learning, hands-on prototyping, and technical storytelling.
Work at the core of our differentiation. We’re focused on unlocking a deeper layer of AI infrastructure. By optimizing the way data is stored, processed, and retrieved, we make platforms like Snowflake and Databricks faster, more cost-efficient, and more AI-native. Our work sits at the most fundamental layer of the AI stack: where raw data becomes usable intelligence.
Be part of something early—without the chaos. Granica has already secured $65M+ from NEA, Bain Capital Ventures, A* Capital, and legendary operators from Okta, Tesla, and Databricks.
Grow with the company. You’ll have the chance to grow into a technical leadership role, mentor future hires, and shape both the engineering culture and product direction as we scale.
COMPENSATION & BENEFITS
Competitive salary and meaningful equity
Flexible hybrid work (Bay Area HQ)
Unlimited PTO + quarterly recharge days
Premium health, vision, and dental
Team offsites, deep tech talks, and learning stipends
Help build the foundational infrastructure for the AI era
Granica is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.