How to implement a data science strategy
Modern stack technologies such as big data platforms and cloud computing have had a great impact on the practice of data science. They have allowed practitioners to bring together a host of inter-operable tools and analytics capabilities. In turn, this helps deliver full-scale data science solutions, anchored on best practices and collaboration among key stakeholders.
So says Johnson Poh, Adjunct Faculty (Big Data Analytics), Singapore Management University, who will be delivering a keynote address at ITWeb's Business Intelligence & Analytics Summit 2019, to be held on 14 and 15 March at The Forum in Bryanston.
According to Poh, data practitioners are now better equipped to implement production-grade models, with clearer visibility into the roles, processes and technological advancements that span the entire data operations pipeline, from ingestion to insights consumption.
He says the explosion of big data, coupled with advances in modern stack technologies, has enabled companies to leverage data science to generate business insights more quickly.
"It's no surprise that across industries, companies have begun taking on a data-driven approach by charting out long-term roadmaps and setting up data science teams within their organisations."
However, he says, bringing technology, processes and people together is easier said than done. His advice to businesses is to take a product-centric view and begin by understanding the specific components required to deliver an end-to-end data science pipeline.
During his presentation, 'Implementing an effective data science blueprint with the modern stack', Poh will discuss how to implement a data science strategy, taking into account people, process and technology. He will also reveal what technical components are required to build an end-to-end data-driven pipeline.