Description
Build the Future Workforce
Wand turns AI into labor. It enables humans and AI agents to operate together as a unified, hybrid workforce, with comprehensive management and oversight. And it’s already operating at scale inside some of the world’s largest organizations.
Wand built the world’s first Agentic Labor Infrastructure, enabling governments and global enterprises to create, manage, and scale digital workforces.
Our mission is to integrate agent ecosystems into the core of work and business, unlocking a generational leap in the global economy. We’re building the infrastructure that lets humans and AI agents operate together safely, transparently, and at scale.
Join Wand in leading the Agentic Shift
Wand is building a high-performing global team that takes full ownership of what it builds. We lead by example, move fast, make data-aware decisions, and continuously push for more, always with a focus on delivering real value to customers.
You would be joining a world-class team that combines deep research expertise with real-world product execution, with experience spanning DeepMind, Google, Amazon, Miro, Elise AI, IBM, and Accern.
Requirements
Position Summary:
This role is ideal for a hands-on senior engineer with 10+ years of experience building information retrieval and data integration systems, especially in a fast-paced startup environment. You will own the design, development, and maintenance of the context management layer that powers our AI agents, ensuring they can access, retrieve, and reason over the right information from diverse enterprise sources.
Our tech stack includes:
- Backend: Python, FastAPI, BlackSheep, Temporal
- Data & Search: Elasticsearch, MongoDB, PostgreSQL, Redis, ClickHouse, Snowflake
- AI/ML: OpenAI, LangChain, LangGraph, LiteLLM, Docling
- Messaging: RabbitMQ, Kafka
- Cloud & Infra: Azure, Docker, Kubernetes
Responsibilities:
- Design and build scalable search and retrieval systems combining lexical and semantic approaches.
- Develop and maintain connectors to enterprise data sources (SaaS platforms, data warehouses, document stores, APIs).
- Build data pipelines that ingest, transform, and index customer data for use by AI agents.
- Integrate with LLM providers and related frameworks (e.g., LangChain, LlamaIndex) to deliver context-aware agent capabilities.
- Pull and process analytics data from customers' warehouses (Snowflake, BigQuery, Databricks, etc.).
- Own projects end-to-end: from architecture and technical design through implementation, deployment, and ongoing maintenance.
- Collaborate with product and AI teams to translate retrieval quality into measurable agent performance improvements.
- Optimize retrieval pipelines for latency, relevance, and cost efficiency at scale.
- Uphold a culture of high efficiency, creativity, and quality.
Key Qualifications:
- Degree in Computer Science, Engineering, or a related field.
- 10+ years of engineering experience with a focus on search, information retrieval, or data engineering.
- Strong proficiency in Python; willingness to work in additional languages as the stack evolves.
- Hands-on experience with search technologies (Elasticsearch, vector databases such as Pinecone, Weaviate, Qdrant, or similar).
- Solid understanding of embeddings, semantic search, and retrieval-augmented generation (RAG) patterns.
- Experience building and maintaining data pipelines and ETL/ELT workflows.
- Familiarity with at least one major data warehouse platform (Snowflake, BigQuery, Databricks, Redshift).
- Experience working with LLM APIs and agent frameworks in production.
- Proficiency in at least one cloud environment (GCP, AWS, Azure).
- Proven track record in a fast-paced startup environment.
- Self-sufficiency across the stack; comfortable operating without dedicated DevOps support.
- Experience with containerized environments (Docker, Kubernetes).
Preferred Experience:
- Background in building enterprise SaaS integrations or source connectors at scale.
- Experience with chunking strategies, re-ranking models, and hybrid retrieval approaches.
- Familiarity with data governance, access control, and multi-tenant data architectures.
- Contributions to open-source search or retrieval projects.
- Experience with production systems serving enterprise customers.
Personal Characteristics:
- Strong individual contributor comfortable owning major projects with minimal oversight.
- Thinks architecturally: balances long-term design quality with startup speed.
- Excellent communication and interpersonal skills.
- Continuous drive for improvement and innovation.