<aside>
<img src="attachment:0728a433-d3ca-4ea5-a398-53f4ae3eca66:slng_ai_logo.jpg" alt="SLNG logo" width="40px" />
SLNG is building the backbone for real-time speech AI, enabling developers to run voice applications anywhere in the world with local compliance and ultra-low latency.
Our founding team comes from the core of AI and developer tooling in the USA, with experience scaling platforms trusted by the world’s best builders. Meet Luke, Founder & CEO, and Ismael, Founder & CPO at SLNG.
We’re bringing that San Francisco mindset to Europe with our first hub in Barcelona, and we have the support of leading international VCs and Angels who are backing the SLNG journey.
Our mission is simple: make AI voice relevant and available for the rest of the world.
Challenge → Today, most speech infrastructure is US-centric, slow, and difficult to deploy globally. SLNG is changing that by delivering a platform that is:
- Local: deployed close to users and compliant with regional regulations
- Fast: designed for real-time voice experiences
- Open: built to integrate with the tools and workflows developers already use
From real-time transcription to voice AI applications, SLNG is creating the Voice AI gateway that will empower the next generation of speech-powered products.
🎙️ As Speech Model Performance Engineer, you’ll work closely with Ismael and the founding tech team to shape the technical foundation of SLNG. From TTS voices to multilingual ASR, you’ll benchmark, optimize, and productionize speech inference at scale.
</aside>
<aside>
💸
What do we offer?
- We pay top of market → We want to attract and retain serious talent, and we benchmark compensation accordingly.
- Equity → All full-time, permanent roles include stock options — we want everyone to share in the upside as we build.
- Hybrid by design → 3 days/week in the Barcelona office for collaboration and culture.
- Flexible benefits → Health insurance, gym, and more via Cobee.
- L&D → Annual budget (up to €1000) for training, courses, or conferences.
- Remote work support → Monthly stipend (up to €50) to cover wifi or other costs.
- Great equipment → Laptop of your choice plus €500 to set up your workstation in year one, with top-ups in following years.
- Time off → 5 extra vacation days on top of the statutory minimum.
</aside>
<aside>
✨ About the Role
You’ll make models fast.
Example Initiatives
- Profile GPU utilization and CUDA kernels to identify bottlenecks in large-scale inference.
- Benchmark ASR models across dialectal variations, measuring Word Error Rate (WER) and latency trade-offs.
- Implement continuous batching and KV-cache reuse for streaming inference.
- Quantize neural TTS and STT models to run with minimal latency on heterogeneous GPU hardware.
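To give a flavor of the benchmarking work above, here is a minimal sketch of a Word Error Rate (WER) computation, the standard edit-distance metric for ASR evaluation. This is an illustrative example only, not SLNG tooling; in practice you would likely use a library such as jiwer.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution out of four reference words -> WER 0.25
print(wer("the cat sat down", "the cat sat town"))
```

Benchmarking across dialectal variations then means running this metric per dialect subset and reporting it alongside latency, so accuracy/speed trade-offs are visible per region.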
<aside>
❓
Requirements
- PyTorch/ONNX experience.
- Familiarity with GPU profiling and optimization.
- A background in ASR/TTS is a plus, but not required.
- Fluency in English.
</aside>
</aside>
<aside>
♻️
Hiring Process (2 weeks or less)
- Intro call with one of the founders and Sofia, our Talent Partner
- Technical or role-specific exercise
- Panel conversation with Co-founder
- Final discussions
- Job offer 🔛
</aside>
<aside>
👩🏼‍💻
Ready to move forward? Reach out to Sofia here or book the first call here
</aside>