Sr. Processing Backend Engineer



What we need:
We are looking for a strong data processing engineer who enjoys being challenged with new ways of implementing data transformations. You should be proficient in both Ruby and Java, with a dose of Kotlin, and enjoy working in all of them, since they will be part of your day-to-day. We are looking for a team player who can have spirited debates about architecture without ego. You will need to be flexible and comfortable working in an ever-changing environment. Finally, you will thrive if you enjoy owning your projects from beginning to end. Bonus points if you have experience with highly parallel job processing, job orchestration frameworks, and handling the complexities of data duplication in map-reduce architectures.

Our current stack includes: MySQL, Redis, Java, Kotlin, Ruby, MongoDB, and Elasticsearch, all running in AWS.

What you’ll be doing:
  • You will work on our processing pipeline handling billions of documents and terabytes of data
  • You will work on parts of the core pipeline that handle data transformation, security, and OCR utilities to make data searchable
  • You’ll be responsible for writing clean, modular, maintainable code within our codebase
  • You will take a design or proposal and carry it through to a thoughtful, polished end result with good test coverage
  • You will work with our amazing Customer Success team to deliver the features and fix the reported issues that make our customers love us
  • You will review code written by other engineers and provide useful and honest feedback
  • You will help architect solutions that scale our proprietary data processing platform to 10x, 100x, and beyond
  • You will participate in on-call rotations and work effectively and collaboratively during site outages

What we need from you:
  • You have 2+ years of experience with Java/Kotlin on the JVM and enjoy working in the JVM ecosystem
  • You have 2+ years of experience with Ruby or Python in a production environment
  • You know how to make these languages work and take joy in making them perform
  • You have a good understanding of highly distributed processing systems
  • You are comfortable navigating through complex data structures and algorithms, and have a strong desire to produce optimal code for speed and efficiency
  • You are familiar with the basics of scalable software design and architecture
  • You have excellent communication skills and the willingness to share your expertise
  • You are open minded and enjoy learning from others 
  • You are pragmatic and sensible in your approach to problem solving
  • You are able to think critically and gather data to constructively support your position
  • You thrive in fast-moving environments without the need for constant supervision
  • Logikcull’s mission and values speak to you, and you feel they could inspire you to do your best work
  • Your gif game is strong and you know just the right clip in any situation
  • [Bonus] You have experience with batch processing and scheduling of remote jobs in parallel
  • [Bonus] You have strong experience with map-reduce concepts and deduplication of incoming data
  • [Bonus] You have experience with data transformation pipelines and job scheduling frameworks (Airflow, Conductor, Spark)