BMC Software’s Basil Faruqui:
How to master your data and AI approach
Basil Faruqui, director of solutions marketing at BMC Software, discusses the value of data orchestration, DataOps, and artificial intelligence in automating complex workflows for business success.
What recent changes have occurred at BMC?
At BMC, especially with regard to our Control-M product line, these are exciting times as we continue to assist some of the biggest global corporations in automating and coordinating business outcomes that depend on intricate workflows.
Our approach has placed a lot of emphasis on DataOps, particularly on orchestration within the DataOps methodology. We have provided more than seventy integrations to serverless and PaaS options across AWS, Azure, and GCP over the past twelve months, allowing our customers to quickly incorporate contemporary cloud services into their Control-M orchestration routines. We are also accelerating workflow development and run-time optimization by developing GenAI-based use cases.
What emerging trends are you seeing in DataOps?
What we are seeing in the data world in general is continued investment in data and analytics software. Analysts estimate that spend on data and analytics software last year exceeded $100 billion. If you look at the Machine Learning, Artificial Intelligence & Data Landscape that Matt Turck at Firstmark publishes every year, it's more crowded than ever before: it has 2,011 logos, more than 500 of them added since 2023. Given this rapid growth in tools and investment, DataOps is now taking center stage as companies are realising that successfully operationalising data initiatives takes more than simply adding engineers.
DataOps practices are now serving as the model for scaling these projects to production. This operational approach will become even more crucial in light of the current explosion in GenAI.
What should businesses keep in mind while developing a data strategy?
As I mentioned, CEOs, CMOs, CFOs, and other executives continue to invest heavily in data initiatives, and the expectation is that this investment will deliver not just incremental efficiencies but game-changing, transformative business outcomes.
This means three factors take on greater significance. First, clearly align the data strategy with the goals of the business, so that IT teams focus on what matters most to it. Second, data quality and accessibility: quality is vital, because low-quality data yields inaccurate insights, and accessibility, having the right data available to the right people at the right time, is equally crucial. Democratizing data access allows teams across the organization to make data-driven decisions while preserving the necessary safeguards. Third, reach production scale. The strategy must ensure that operational readiness is built into data engineering practices from the start, rather than treated as an afterthought once the pilot is done.
How is data orchestration incorporated into an organization's overall strategy?
Data orchestration is arguably the most important pillar of DataOps. Most businesses have data spread across multiple platforms: cloud and on-premises systems, legacy databases, and third-party applications. Integrating and coordinating these many data sources into a unified, coherent system is crucial. By guaranteeing seamless data flow between systems, effective data orchestration reduces duplication, latency, and bottlenecks and speeds up decision-making, as the sketch below illustrates.
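As a rough illustration of what orchestration means in practice, here is a minimal sketch in plain Python. It is not Control-M's actual interface, and the task names are hypothetical; it simply shows the core idea of running pipeline steps in dependency order, so a downstream step only starts once every upstream source has delivered.

```python
# Minimal dependency-aware pipeline runner -- an illustration of the
# orchestration idea, not Control-M's actual API. Task names are made up.
from graphlib import TopologicalSorter  # Python 3.9+ standard library

def extract_crm():       print("pull customer records from the CRM")
def extract_erp():       print("pull order history from the ERP")
def load_warehouse():    print("merge both feeds into the warehouse")
def refresh_dashboard(): print("rebuild the analytics dashboard")

# Each task maps to the set of tasks it depends on.
dag = {
    "extract_crm": set(),
    "extract_erp": set(),
    "load_warehouse": {"extract_crm", "extract_erp"},
    "refresh_dashboard": {"load_warehouse"},
}
tasks = {
    "extract_crm": extract_crm,
    "extract_erp": extract_erp,
    "load_warehouse": load_warehouse,
    "refresh_dashboard": refresh_dashboard,
}

# Run every task only after all of its upstream dependencies have finished.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()
```

The point of the sketch is the ordering guarantee: the warehouse load cannot begin until both extracts finish, which is the property that eliminates the duplication and bottlenecks described above.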
What are the main challenges that your clients inform you of with regard to data orchestration?
Organizations still face the difficulty of delivering data products quickly and then scaling them swiftly in production. GenAI is a notable example. Boards and CEOs around the world are demanding speedy results, believing that those who fail to fully exploit this technology risk being severely disrupted. GenAI is making techniques such as prompt engineering and prompt chaining commonplace. The difficulty lies in integrating LLMs, vector databases, bots, and other components into the wider data pipeline, which spans a highly mixed architecture of multiple clouds and, for many applications, mainframes.
This only underlines the need for a strategic approach to orchestration, one that allows new technologies and patterns to be integrated for the scalable automation of data pipelines. As one customer put it, Control-M functions as a kind of orchestration power strip: users can plug in new technologies and patterns as they emerge, without having to rewire everything each time they swap an outdated technology for a more modern one.
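To make the power-strip analogy concrete, here is a hypothetical Python sketch (not Control-M's real API; all class and field names are invented). Every technology, old or new, implements the same small step contract, so a GenAI step can be plugged into an existing pipeline without rewiring the rest.

```python
# Hypothetical "power strip" pattern: every technology implements the same
# small contract, so new steps plug into the pipeline without rewiring it.
# None of these classes reflect Control-M's actual interface.
from typing import Protocol

class PipelineStep(Protocol):
    name: str
    def run(self, payload: dict) -> dict: ...

class WarehouseLoad:
    name = "warehouse_load"
    def run(self, payload: dict) -> dict:
        payload["rows_loaded"] = 10_000      # stand-in for a real load job
        return payload

class PromptChainStep:
    """A GenAI step slotted in later -- e.g. summarising records via an LLM."""
    name = "llm_summarise"
    def run(self, payload: dict) -> dict:
        # Placeholder for an actual LLM call in a prompt chain.
        payload["summary"] = f"summary of {payload['rows_loaded']} rows"
        return payload

def run_pipeline(steps: list[PipelineStep]) -> dict:
    payload: dict = {}
    for step in steps:          # the wiring never changes, whatever the steps are
        payload = step.run(payload)
    return payload

# Adopting the new GenAI step changes the step list, not the wiring.
print(run_pipeline([WarehouseLoad(), PromptChainStep()]))
```

The design choice the analogy captures is that the pipeline depends only on the shared contract, so an LLM step and a legacy warehouse load are interchangeable plugs in the same strip.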
What are your best tips for ensuring optimal data orchestration?
There are many good tips, but here I'll concentrate on one: interoperability between application and data workflows, because in my opinion it is essential to achieving speed and scale in production.
While orchestrating data pipelines is critical, it's just as important to remember that these pipelines are part of the enterprise's larger ecosystem. Consider an ML pipeline used to predict which customers are most likely to defect to a competitor. The data feeding that pipeline is produced by workflows from the ERP/CRM and a number of other applications, and those application workflows frequently need to complete successfully before the data workflows can start. And once the model identifies the customers likely to switch, we may need to go back to the application layer of the ERP and CRM to send them a promotional offer.
Control-M is well positioned to solve this problem, because our customers already use it to coordinate and manage complex dependencies between the application and data layers.
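A hedged sketch of that churn scenario follows, again in plain Python with invented function names; a real orchestrator such as Control-M would express these dependencies declaratively rather than in code. It shows the two cross-layer handoffs: the data pipeline waits on the application workflows, and the model's output triggers application work again.

```python
# Hypothetical end-to-end flow for the churn example. All function names are
# made up; real orchestration would declare these dependencies, not script them.

def erp_batch_complete() -> bool:
    return True   # stand-in: did last night's ERP workflow finish cleanly?

def crm_batch_complete() -> bool:
    return True   # stand-in: did the CRM export finish cleanly?

def run_churn_pipeline() -> list[str]:
    # Data layer: the ML pipeline scores customers only after the
    # upstream application workflows have succeeded.
    return ["customer_42", "customer_99"]   # stand-in for model output

def send_offer(customer: str) -> None:
    # Back to the application layer: trigger a promotional offer in the CRM.
    print(f"queueing retention offer for {customer}")

# Application workflows gate the data workflow; the model's output then
# feeds work back into the application layer -- the dependency chain above.
if erp_batch_complete() and crm_batch_complete():
    for customer in run_churn_pipeline():
        send_offer(customer)
```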
What do you think are the biggest benefits and challenges of implementing AI?
AI, and GenAI more specifically, is driving rapid technological change across the data ecosystem: numerous new models, vector databases, and automation patterns centered on prompt chaining, among other things. This dilemma is not new to the data world, but the rate of change is accelerating. From an orchestration standpoint, we see enormous potential with our customers, because we give them a highly flexible orchestration platform that lets them integrate these tools and patterns into their existing processes rather than starting from scratch.
Do you have any case studies of businesses using AI effectively that you could share with us?
Domino's Pizza uses Control-M to orchestrate its vast and complex data pipelines. Domino's manages more than 3,000 data pipelines that channel data from various sources, including internal supply chain systems, sales data, and third-party integrations, across more than 20,000 locations worldwide. Before this application data can inform decisions about food quality, customer satisfaction, and operational efficiency across its franchise network, it must pass through complex transformation patterns and models.
Control-M plays a crucial role in orchestrating these data workflows, ensuring seamless integration across a wide range of technologies like MicroStrategy, AMQ, Apache Kafka, Confluent, GreenPlum, Couchbase, Talend, SQL Server, and Power BI, to name a few.
Beyond connecting complex orchestration patterns together, Control-M gives them end-to-end visibility of their pipelines, ensuring that they meet strict service-level agreements (SLAs) while handling increasing data volumes.
Control-M is helping them generate critical reports faster, deliver insights to franchisees, and scale the rollout of new business services.
What can we expect from BMC in the year ahead?
Our strategy for Control-M at BMC will stay focused on a couple of basic principles:
Continue to allow our customers to use Control-M as a single point of control for orchestration as they onboard modern technologies, particularly on the public cloud. This means we will continue to provide new integrations to all major public cloud providers so that customers can use Control-M to orchestrate workflows across the three major cloud infrastructure models: IaaS, containers, and PaaS (serverless cloud services). We plan to keep a strong focus on serverless, and you will see more out-of-the-box integrations from Control-M to support the PaaS model.
We recognise that enterprise orchestration is a team sport involving coordination across engineering, operations, and business users. With this in mind, we plan to deliver a persona-based user experience and interface so that collaboration is frictionless.
Specifically, within DataOps we are looking at the intersection of orchestration and data quality with a specific focus on making data quality a first-class citizen within application and data workflows. Stay tuned for more on this front!