Data science roles bridge statistics, programming, and business intelligence. Demand is strong across all sectors (from fintech to e-commerce), with Python, SQL, and cloud data platforms (Snowflake, BigQuery, dbt) as the core stack.
Design Data Solutions: Leverage your analytical skills to design innovative data solutions that address complex business requirements and drive decision-making.
This team specialises in Java-based data engineering, designing and delivering large-scale ELT/ETL workflows on a data lake house platform. You will be working with modern big data technologies to move, transform, and optimise data for high-performance analytics and regulatory reporting.
Data Product Delivery: You transform raw data into scalable data products.
Unstructured Data & Classification: You implement solutions for the automatic classification and processing of unstructured data (such as ...
- Aid the team in delivering continual data feeds to users and monitoring the health of data quality and overall data ingest
- Visualize, interpret, and report data findings in dynamic reports
- Employ a variety of data manipulation and visualization tools to effectively convey current status and historical trends to leadership, users, and the data team
- Collaborate with platform, software, and other data engineers to (re)configure data ingestion pipelines to be more reliable
- Comfortable working with data in a variety of formats, including Excel, CSV, JSON, and XML
- Support the incident management process to ensure that incidents are documented and resolved quickly
- Process learnings and historical data from data pipelines, translating them into actionable steps to improve data ingest
Your mission:
- Manage data integration of multiple sources
- Further develop data pipelines, ETL processes, the data warehouse, data interfaces, reports, and BI solutions (OLAP, dashboards)
- Manage ad-hoc data analysis
- Create proof-of-concepts, prototypes, and spikes
- Proactively take care of code quality and testing
- Address non-functional requirements

Your profile:
- Engineering mindset
- Professional knowledge of DWH / data models (dimensional, data vault, relational, NoSQL)
- Professional experience with ETL tools (Pentaho Data Integration or similar)
- Professional experience with SQL and DBMS (PostgreSQL)
- Experience with BI tools (QlikView / QlikSense or similar)
- Know-how in software engineering practices such as Scrum, CI/CD, and TDD

Nice to have:
- Experience with Linux / Bash
- Experience with cloud environments / Docker
- Knowledge of Python
- Knowledge of Java
- German language skills

What we offer:
- Work-life balance: flexible working hours with the possibility of home office (approx. ...
- Salary: the offered gross yearly salary starts at EUR 45,000 excluding overtime, depending on qualification and experience
In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Main responsibilities:
- Controlling and reporting of costs and income on allocated Release projects (actual vs. latest forecast)
- Owning the preparation and processing of intercompany timesheet and (project/non-project) cost invoicing in Microsoft Dynamics 365
- Supporting the determination and maintenance of Release corporate, project, and employee cost-rate master data in Microsoft Dynamics 365
- Initiating and leading Release projects related to the development, structure, and improvement of Release's management reporting processes, including system improvements and changes (Microsoft Dynamics 365, Power BI, other relevant systems)
- Defining, designing, and implementing internal project controlling and reporting processes, and documenting such process descriptions
- Acting as a business partner to the financial accounting and reporting function on project-related matters impacting financial reporting compliance
- Business partnering with internal stakeholders outside of the Release finance function (Procurement and Logistics, Project Managers, the Business Development team, etc.) on project- and financing-related matters...
Provide independent data science, machine learning, and analytical insights using member, financial, and organizational data to support mission-critical decision-making for various areas of the organization.

Responsibilities:
- Conduct model validations, including evaluating model conceptual soundness, assessing data appropriateness, reviewing statistical and analytical testing, outcome evaluation, and performance monitoring
- Provide subject-matter expertise to identify models' key assumptions, limitations, and weaknesses, and recommend practical solutions to mitigate model risk
- Develop benchmark/challenger models to assess the strength of the model under review
- Prepare and present clear, thorough reports to model developers and model owners explaining the analysis performed, its results, and recommendations for improvement
- Design, develop, and evaluate large and complex predictive models and advanced algorithms as needed
- Test hypotheses/models, analyze, and interpret results
- Develop actionable insights and recommendations
- Develop and code complex software programs, algorithms, and automated processes
- Use evaluation, judgment, and interpretation to select the right course of action
- Work on problems of diverse scope where analysis of information requires evaluation of identifiable factors
- Produce innovative solutions driven by exploratory data analysis of complex and high-dimensional datasets
- Transform data into charts, tables, or other formats that aid effective decision-making
- Use effective written and verbal communication to document analyses and present findings to a diverse audience of stakeholders
- Develop and maintain strong working relationships with team members, subject-matter experts, and leaders
- Lead moderate to large projects and initiatives
- Model best practices and ethical AI W...
PhD fellowship in Health Inequality, Copenhagen Health Complexity Center, Department of Public Health, Faculty of Health and Medical Sciences, University of Copenhagen. A three-year position as a PhD Fellow is now available at the Copenhagen Health Complexity Center, Department of Public Health, University of Copenhagen. The Center leverages multi-layered nationwide social and health data combined with an interdisciplinary fusion of methods spanning data science, epidemiology, ethnography, econometrics, and systems science.
Salary tags blend employer-provided ranges with Catalitium estimates. We mark estimated ranges with "Est." labels, note any missing data, and never inflate compensation to boost clicks.
Currency harmonized to USD/EUR/GBP/CHF to avoid surprises.
Outliers are reviewed manually before they appear on a card.
Sponsored employers follow the same disclosure and pay rules.
Are remote AI and ML jobs growing in Europe?
Yes. Remote-friendly AI and ML roles in the EU have grown more than 30% year-on-year. Germany, France, the Netherlands, and Spain lead in volume. Use the AI and EU filters together to surface them quickly, and check the salary estimate badge to ensure the range meets your expectations.
How fresh are the job postings?
Listings are refreshed continuously from employer feeds and normalised daily. Each card shows a posted date pill so you can see exactly how old a listing is. Jobs posted within the last 7 days receive a green New badge. Listings older than 30 days receive a May be filled warning.
Do roles include salary estimates?
Yes. Most listings show an Est. salary pill derived from Catalitium's location-based salary database, blended with any employer-disclosed range. Senior and lead roles receive an automatic seniority uplift. If a salary range is genuinely unknown we leave the field blank rather than show a misleading estimate.
What is a ghost job and how do I spot one?
A ghost job is a listing that has been live for 30+ days and is likely already filled, on hold, or was never a real opening. Research suggests up to 40% of active listings at any time are ghost jobs. Catalitium flags every listing older than 30 days with a triangular "May be filled" badge so you can focus your energy on fresh openings.
What are the highest-paying tech roles right now?
AI and ML engineer roles currently command the highest median salaries on Catalitium: around USD 150k–200k in the US and EUR 100k–160k in Europe. Principal and Staff Engineer roles come close, followed by senior full-stack and cloud infrastructure engineers. Use the >100k filter to see only high-compensation listings.
How do I negotiate a higher salary offer?
Reference Catalitium's salary data when negotiating: show the employer the market range for your role and region. Studies show engineers who negotiate receive 10–20% more than the initial offer on average. If base salary is fixed, push on equity, signing bonus, remote allowance, and learning budget. See our Salary Negotiation Guide in the Resources section.
Which European cities pay the most for tech?
Zurich and Geneva (Switzerland) consistently top European tech salaries, followed by London, Amsterdam, Berlin, Paris, and Stockholm. Swiss salaries are typically quoted in CHF and translate to EUR 100k–160k for mid-senior roles. London follows at GBP 70k–110k. Berlin and Amsterdam are competitive at EUR 70k–100k for comparable experience levels.
Can I track my job applications on Catalitium?
Yes. Our free Application Tracker lets you move roles through a Kanban pipeline: Applied, Phone Screen, Interview, Offer, and Closed. It requires no account and stores everything privately in your browser. Hit the Track button on any job card to add it. You can also export your full pipeline as a CSV.
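The pipeline stages and CSV export described above could be modeled like this. A minimal in-memory sketch only: the real tracker lives in the browser's localStorage, and the class and field names here are assumptions.

```python
import csv
import io

# The five Kanban stages named in the FAQ, in pipeline order.
STAGES = ["Applied", "Phone Screen", "Interview", "Offer", "Closed"]


class ApplicationTracker:
    """Toy in-memory version of a Kanban application pipeline."""

    def __init__(self):
        self.jobs: dict[str, str] = {}  # job title -> current stage

    def track(self, title: str) -> None:
        """Tracking a job adds it to the pipeline at 'Applied'."""
        self.jobs[title] = STAGES[0]

    def advance(self, title: str) -> None:
        """Move a job to the next stage, stopping at 'Closed'."""
        i = STAGES.index(self.jobs[title])
        self.jobs[title] = STAGES[min(i + 1, len(STAGES) - 1)]

    def to_csv(self) -> str:
        """Export the full pipeline, mirroring the CSV export."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["title", "stage"])
        for title, stage in self.jobs.items():
            writer.writerow([title, stage])
        return buf.getvalue()


tracker = ApplicationTracker()
tracker.track("Data Engineer")
tracker.advance("Data Engineer")  # Applied -> Phone Screen
print(tracker.to_csv())
```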
How does Catalitium differ from LinkedIn Jobs?
LinkedIn optimises for engagement and premium upgrades. Catalitium is built exclusively for tech candidates who want signal over noise: every listing shows salary estimates, ghost jobs are flagged, AI-powered summaries save you time reading descriptions, and the application tracker replaces the black-hole Easy Apply experience. No premium paywall, no recruiter spam.
Can I filter to remote-only jobs?
Yes. Choose Remote in the location/country field or tap a Remote shortcut chip. Results are limited to roles that advertise remote or hybrid work where the listing text supports it, and remote-friendly rows show a Remote badge.
Which tech stacks are most in demand?
Across Catalitium tech listings, Python, TypeScript/JavaScript, Go, Java, and cloud platforms (AWS, GCP, Azure, Kubernetes) recur most often; AI and data roles add PyTorch, TensorFlow, and LLM tooling. Title and AI summary chips reflect the employer's stated stack.
Which Swiss cities pay the most for software roles?
Zurich and Geneva typically lead Switzerland for software, data, and platform engineering compensation; smaller hubs follow at a discount. Swiss ranges often sit above neighbouring EU markets for comparable seniority—check Est. salary on each card when you filter by Switzerland.
Save this search and get a weekly digest of top matches.
Includes salary signal. No spam.