Data Scientist with DevOps experience for new Artificial Intelligence team
Do you want to be part of the ongoing revolution within hearing devices that will improve the quality of life for millions of hearing-impaired people? Do you want to utilize the cloud for everything from high-performance computing to setting up efficient data pipelines? Do you want to build the data foundation for the future?
What we offer
Widex recently launched “SoundSense Learn” - the first ever AI feature within the industry - as part of the new Widex Evoke™ platform. SoundSense Learn uses machine learning, specifically reinforcement-learning techniques, to perform “AI at the edge”: it learns the end-user's preferences in the current environment quickly and reliably and adjusts the hearing aid's sound processing accordingly. At Widex we have ambitious goals within the field of AI. We have therefore launched a dedicated AI team which brings together experts in machine learning and data science. The team has a high degree of start-up culture with a “fail-fast” and “develop as we go” mentality.
We offer talented candidates with a solid practical background in data science or cloud computing the opportunity to join the newly formed AI team. The team is composed of machine-learning and data-science experts with sufficiently diverse skills to cover everything from theoretical pen-and-paper tasks to implementing data pipelines and models for product releases.
Your primary goal will be to organize and build data streams; to develop and maintain tools and frameworks for data science; and to conceptualize, develop, test and refine data-driven solutions that support our future generations of advanced hearing aids and services relying significantly on AI.
Your daily tasks include:
- Breaking down complex tasks and planning your daily work with the team
- Participating in inventing, proof-of-concept and prototyping of cutting-edge algorithms and machine learning applications
- Implementing and deploying proprietary machine learning models to fit the target infrastructure
- Implementing analytics tools to extract valuable insights from real-world data sets
- Data engineering tasks such as building pipelines and infrastructure, enriching data and making it accessible to the entire organization, with the purpose of extracting structured value from unstructured data
Furthermore, you will involve stakeholders across the organization, e.g. other R&D departments, IT, etc.
You have always liked working with software and tools and enjoy learning new frameworks and best practices. You have a background in working with data, from storing and organizing it to processing it. You can point to successes within your area of expertise, preferably in relation to product development. You work comfortably and efficiently in teams towards a common goal. You have a can-do attitude and would rather build small proof-of-principle prototypes or examples than think everything through from the beginning. We are breaking new ground in the team and are building almost everything as we go, so you need to be a fast learner and creative with modern tools and data pipelines. You hate doing the same thing twice and prefer to automate as much as possible.
- B.Sc. or M.Sc. in computer science, data science or software development.
- Solid practical experience. Strong analytical mindset combined with an ability to build real-world solutions from real-world unstructured data.
- Experience with cloud platforms from any provider, e.g., Microsoft Azure, Amazon Web Services (AWS).
- Experience with modern NoSQL database technologies, e.g., Cosmos, DynamoDB, MongoDB.
- We do a lot of coding in many different frameworks and languages, so solid coding experience across various languages and frameworks, together with well-developed programming principles in general, is a strong plus.
- Proficiency in Python.
- Fluency in written and spoken English.
Desired qualifications:
- Strong practical experience with large-scale machine learning approaches, e.g., frameworks for deep learning.
- Experience with open-source and cloud-based large-scale distributed computing frameworks for data science, e.g., Hadoop, Spark.
- Proficiency in C/C++, C# and Node.js.
- Continuous integration experience, e.g., Jenkins, Travis, Bitbucket Pipelines, Docker, Kubernetes, etc.
Please submit your application as soon as possible, but no later than January 31st, 2019. We will screen and invite candidates for interviews on an ongoing basis. If you require further information about this position, please contact us via e-mail: Adam Westermann (firstname.lastname@example.org) or Jens Brehm Nielsen (email@example.com).
Please write in your application that you've seen the job at Jobfinder.