About BigGeo
BigGeo is redefining geospatial intelligence with an AI‑ready Discrete Global Grid System (DGGS) that transforms how spatial data is captured, indexed, and monetized. Our platform powers mission‑critical decisions across sectors where location intelligence drives outcomes—from large‑scale infrastructure projects and environmental planning to logistics and emergency response. We are industry agnostic, unlocking possibilities for organizations that have yet to realize the value a system like ours can deliver.
Backed by Vivid Theory, a venture studio dedicated to building transformative technologies, we’re a multidisciplinary, entrepreneurial team built for impact. We work quickly, push boundaries, and expect every team member to be both a thinker and a doer.
The Opportunity
We’re seeking a Senior Platform Engineer focused on high‑performance backend systems built with modern statically compiled languages. This role emphasizes building reliable, secure, and performant infrastructure that powers our product offerings. If you’re a developer who thrives on creating high‑performance, observable systems and isn’t afraid to dive deep into low‑level optimizations while building reliable platform services, we want to hear from you!
Primary Responsibilities
- Design and implement efficient, reliable, secure, and observable backend systems
- Optimize code for performance and resource utilization
- Contribute to architectural decisions for distributed systems and big‑data processing
- Write and maintain observable, instrumented code that enables effective system monitoring
- Lead the development of complex platform features
- Design and implement scalable data architectures
- Conduct thorough performance testing and optimization
- Mentor junior developers; promote and enforce best practices
- Lead initiatives to align platform development with business objectives, ensuring that all platform functionalities contribute positively to key outcomes and KPIs
- Facilitate a smooth transition of platform features to product teams, supporting seamless integration and effective use within product pipelines
- Continuously evaluate and optimize the platform to enhance user experience and deliver measurable business value, supporting overall company growth objectives
- Assume full ownership and accountability for strategic technology domains, with the ability to articulate their business value and organizational impact
- Drive DevOps practices and automation initiatives
- Monitor and analyze technical performance of internal systems
- Leverage existing CI/CD pipelines and tooling for efficient deployment workflows
- Support deployment and operational excellence
- Contribute to infrastructure‑as‑code initiatives
Requirements
- Bachelor’s degree in Computer Science, Software Engineering, Data Science, or a related field (or equivalent practical experience)
- Proven track record in high‑performance backend development
- Proficiency in modern statically compiled languages
- Strong understanding of immutability principles and their application
- Expertise in writing efficient, reliable, and secure code
- Proficient with both manual memory management and automatic lifetime management techniques
- Strong understanding of computer architecture and efficient utilization of available resources
- Strong knowledge of fundamental data structures and algorithms
- Understanding of performance trade‑offs between algorithmic efficiency, distributed systems coordination, and I/O minimization in big‑data contexts
- Experience with modern observability patterns and practices
Backend Technology Stack Requirements
Core Languages & Frameworks
- Experience with modern statically compiled languages (Go, Rust, C++, or similar)
- Familiarity with testing frameworks and benchmarking tools
- Understanding of dependency management and build systems
Databases & Data Storage
- Strong experience with relational databases (PostgreSQL, MySQL)
- Proficiency with NoSQL databases (MongoDB, Redis, Cassandra)
- Experience with time‑series databases (InfluxDB, TimescaleDB, or Prometheus)
- Knowledge of database optimization, indexing strategies, and query performance tuning
- Experience with connection pooling and database driver optimization
Message Queues & Event Streaming
- Experience with Apache Kafka, RabbitMQ, or NATS
- Understanding of event‑driven architectures and pub/sub patterns
- Knowledge of message serialization formats (Protocol Buffers, Avro, MessagePack)
APIs & Communication Protocols
- Expertise in RESTful API design and implementation
- Experience with gRPC and Protocol Buffers
- Knowledge of GraphQL is a plus
- Understanding of API versioning, rate limiting, and authentication patterns (OAuth2, JWT)
Containers & Orchestration
- Proficiency with Docker and containerization best practices
- Experience with Kubernetes (deployment, scaling, service mesh)
- Knowledge of Helm charts and Kubernetes operators
- Experience with container registries and image optimization
Cloud Platforms
- Hands‑on experience with at least one major cloud provider (AWS, GCP, or Azure)
- AWS: ECS/EKS, Lambda, S3, RDS, ElastiCache, SQS/SNS
- GCP: GKE, Cloud Run, Cloud SQL, Pub/Sub, BigQuery
- Azure: AKS, Azure Functions, Cosmos DB, Service Bus
Infrastructure as Code
- Experience with Terraform or Pulumi
- Knowledge of configuration management tools (Ansible, Chef, or similar)
- Experience with GitOps practices (ArgoCD, Flux)
CI/CD & DevOps Tools
- Experience working with CI/CD platforms (Jenkins, GitLab CI, GitHub Actions, CircleCI)
- Ability to effectively leverage existing CI/CD pipelines and deployment automation
- Knowledge of automated testing strategies (unit, integration, e2e)
- Familiarity with build processes and deployment workflows
Observability & Monitoring
- Experience with Prometheus and Grafana
- Proficiency with distributed tracing (Jaeger, Zipkin, or OpenTelemetry)
- Knowledge of structured logging practices and tools
- Experience with APM tools (DataDog, New Relic, or Elastic APM)
- Understanding of SLIs, SLOs, and SLA definitions
Version Control & Collaboration
- Expert‑level Git proficiency
- Experience with code review processes and branching strategies
- Familiarity with monorepo and microservices repository patterns
Nice to Haves
- A Master’s degree or relevant certifications in Distributed Systems, Big Data Processing, or Cloud Computing is a plus
- Experience with Rust (with tokio.rs) or Scala (with cats‑effect) will be given top priority
- Experience with Go (Golang), including concurrency patterns, the standard library, and popular frameworks
- Experience with any modern statically typed language (C++, Java, Kotlin)
- Background in big‑data processing architectures (Spark, Flink, Hadoop)
- Experience with distributed systems and consensus algorithms (Raft, Paxos)
- Experience with high‑performance data structures and lock‑free programming
- Knowledge of geospatial data structures and algorithms (PostGIS, H3, S2 Geometry)
- Expertise in optimizing I/O operations and understanding of Linux kernel internals
- Familiarity with binary protocols and efficient serialization
- Experience with distributed eventing systems (e.g., NATS.io, Pulsar)
- Experience with service mesh technologies (Istio, Linkerd, Consul)
- Knowledge of caching strategies (Redis, Memcached, CDN optimization)
- Experience with load balancing and reverse proxy configuration (Nginx, HAProxy, Envoy)
- Familiarity with security best practices and compliance frameworks (SOC 2, GDPR, HIPAA)
- Experience with performance profiling tools (pprof, flamegraphs, perf)
- Knowledge of WebAssembly (Wasm) and its applications
- Contributions to open‑source projects or maintaining libraries
- Experience with chaos engineering and resilience testing
- Passionate about code efficiency, reliability, and security
- Proactive in finding ways to improve existing systems
- Eager to learn, mentor, and teach
- Strong problem‑solving skills and critical thinking
- Excellent communication and teamwork abilities