Python Data Science Engineer
We build the testing platform used by our customers (Facebook, WhatsApp, Microsoft) and 20,000 professional testers around the world.
Why work with us?
- You’ll work in a fast-feedback, fast-deploy environment
  - We deploy to production multiple times a day
  - DevOps environment: Kubernetes, Prometheus, Grafana, Graylog, Sentry, New Relic, GitLab, Slack & JIRA
- You’ll use the latest technology and practices
  - Backend: Python, S3, Lambda, Kinesis, Spark, Redis, PostgreSQL on RDS
- You’ll grow personally and professionally
  - Leadership: mentoring, personalised development plans, 1:1s
  - Team: Friday tech talks, retrospectives, knowledge sharing
  - Quickstart: bootcamp, buddy system, first-day release to production
Compensation & Benefits
- Negotiable salary within 15,000 – 22,000 PLN net a month on a VAT invoice (B2B), depending on your experience and the quality of your code
- Paid holidays (all holidays in your country + up to 20 days)
- Permanent contract with a six-month trial period
What you'll be doing:
- You will design and implement highly scalable data-processing software, ideally in Python with Spark, AWS Glue, Lambda, Kinesis, and more, building solutions that turn billions of data points from our testers into actionable insights.
- We have just started work on a new product, and you will play a meaningful part in our data science efforts.
- You'll solve a variety of interesting problems around tester management, tester scoring, semi-automated test execution, and spam-activity detection (with ML, natural language processing, and neural networks down the line if you're interested), suggesting technologies that fit the problem at hand.
- You'll design and implement new data-processing algorithms with modular, secure, and well-tested code that has a clear separation of responsibilities.
- You'll work closely with other engineering teams and data scientists, so you can pioneer new technologies.
- You'll improve the team and company – you will be an active participant in our culture (mentorship for less experienced developers, interviewing, and new initiatives)
What we're looking for:
- A minimum of 3 years of commercial experience (4.5 years for senior engineers)
- You have built highly scalable and robust systems in the past, designing and implementing complex applications (both code complexity and data-model complexity)
- Experience defining data pipelines that ingest data, plus data migration, transformation, and scripting
We don't expect someone to tick every box. We are willing to train the right person who wants to learn.
- Expert knowledge of the Python way and the Python ecosystem
- Knowledge of basic Python data science tooling (NumPy, scikit-learn, pandas)
- Big data platforms (Apache Spark, Hadoop, AWS Glue)
- Engineering knowledge to write good-quality, testable code
- Knowledge of event-driven architecture (Kinesis, Kafka, RabbitMQ, CQRS, event sourcing, eventual consistency)
- Solid experience with AWS infrastructure (AWS Glue, Kinesis, Lambda, AWS EMR, S3)
- Learns quickly: understands and absorbs new information proficiently
- Proactive attitude: generates new and innovative approaches to problems and thinks outside the box
- Focus on delivery and self-management
- Good written and spoken English communication skills (B2 level is a must; C1 preferred)
- Data formats: Avro, Protobuf, Parquet
- Machine learning frameworks (TensorFlow or H2O)
- Experience with AI (ML, NLP, neural networks of various types, swarm intelligence, genetic algorithms, etc.)
- Statistical model evaluation & analysis
What we offer:
- A modern office in Kraków's Zabłocie district (with table football) and a partial-remote option
- Our dedication to helping you succeed
- Responsibility and a real say in the future of the company
- A role that is vital to the company's growth; every idea you come up with will be carefully considered