Article
JDA's Journey to a Global Optimization Platform--Highlights from Focus 2018

The mood at this year's JDA Focus (their annual conference) was optimistic, coming off healthy growth last year, with numerous new innovations and major R&D investments underway. Here we cover the highlights, including their 'moon shot', the Autonomous Supply Chain.




Explosive Growth Coming Soon?

Just before the opening keynote address at JDA’s annual conference, Focus, I sat next to one of their senior executives and asked him rhetorically, “Why isn’t JDA a $5B company instead of a $1B company?” I contend that JDA has the richest and deepest palette of functionality and expertise of any supply chain solution provider. In spite of that, their growth has been stagnant for many years. Right after that conversation, Girish Rishi, JDA’s CEO, gave his keynote address in which he said that JDA had grown by 11% last year (their best growth in a while) and boldly predicted that JDA will be a $10B company by 2025 … even more ambitious than the off-the-cuff target I had just proposed! To be honest, that qualifies as an audacious goal, requiring almost 40% CAGR for the next seven years. But I say “great, aim high.” If they achieve even half of that growth, it would be a truly amazing turnaround for this venerable firm.
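As a quick sanity check on that growth-rate claim, the required CAGR is easy to compute directly (a minimal sketch; the $1B and $10B figures are the round numbers from the keynote, not JDA's reported financials):

```python
def cagr(start_revenue, end_revenue, years):
    """Compound annual growth rate needed to grow start_revenue to end_revenue."""
    return (end_revenue / start_revenue) ** (1 / years) - 1

# $1B today, $10B by 2025 -- seven years out
required = cagr(1.0, 10.0, 7)
print(f"{required:.1%}")  # ~38.9%, i.e. "almost 40% CAGR"
```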

The Autonomous Supply Chain, JDA’s ‘Moon Shot’

To achieve that kind of growth, Girish said JDA will spend $500M1 on R&D over the next three years in areas such as AI/machine learning, IoT, and other fronts. He said that creating an Autonomous Supply Chain is JDA’s moonshot. He went on to elaborate on their view of four stages of digital supply chain maturity (the descriptions/comments on each of these are our interpretation):

  • Visibility—This requires integration of data from a variety of systems across the supply chain, many of which are outside of your four walls. Connecting to, cleaning up, and making this data usable is a never-ending journey for companies.
  • Predictive Analytics—The right data, combined with the right algorithms, allows predictions about demand, supply, and disruptions. The earlier that planners and execution managers are alerted to potential issues, the more degrees of freedom they have to fix things at a lower cost.
  • Prescriptive—Various courses of action to respond to changes or disruptions can be prescribed by intelligent engines, based on various optimization algorithms and/or AI/machine learning.
  • Self-Learning Supply Chain—AI/Machine Learning can learn and improve over time, based on observing actions taken by human experts, as well as based on observing the results of various decisions (human or machine-driven). Over time, as trust in the system’s recommendations increases, a greater and greater portion of decisions can be made automatically by the system, without human intervention. Professionals can focus their time and effort on the decisions that require human creativity or knowledge.

Deep learning and AI can be employed in stages two through four above, becoming progressively more prominent in each.


Predictive Maintenance as an Analogy for Predictive Supply Chain

A conversation with Puneet Saxena (JDA’s GVP of Supply Chain Planning) helped me see how predictive supply chain is somewhat analogous to machine-learning-based predictive maintenance for machinery. Machine learning is being used with complex machinery, such as a mining truck, longwall shearer, or a manufacturing production line. These machines typically are peppered with thousands of sensors—for example, Joy Global’s Longwall System has about 7,000 sensors. A machine learning engine can be fed historical data, continuously capturing readings from these sensors, along with records and details of all the failures of those machines. The machine learning engine thereby uncovers the different patterns of changes to sensor inputs that preceded each type of failure. When it observes similar patterns in equipment out in the field, it can make predictions about failures, such as “pump #4AD37 is likely to fail in the next 2-4 weeks.” This can be invaluable, especially for operations that have a high cost of downtime. It provides the service organization time to bring parts, tools, and expertise on site (without incurring expedited delivery costs) and make the repair when it will have the least impact on production. In contrast, without predictive maintenance, the failure happens without warning and production is stopped until emergency parts and technicians can be flown in.
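As an illustration of the idea (not JDA's actual engine), a toy version of this pattern learning might look like the following, where the "model" is simply a learned threshold separating healthy sensor readings from readings observed in the weeks before past failures; a real system would learn multivariate patterns across thousands of sensor channels:

```python
import statistics

def learn_alert_threshold(healthy_runs, pre_failure_runs):
    """Toy 'training': pick a vibration threshold halfway between readings
    taken during normal operation and readings from the weeks preceding
    past failures."""
    healthy = [r for run in healthy_runs for r in run]
    pre_failure = [r for run in pre_failure_runs for r in run]
    return (statistics.mean(healthy) + statistics.mean(pre_failure)) / 2

def failure_likely(recent_readings, threshold, window=5):
    """Flag a likely failure when the recent moving average crosses the threshold."""
    return statistics.mean(recent_readings[-window:]) > threshold

# Hypothetical vibration data for one pump (arbitrary units)
threshold = learn_alert_threshold(
    healthy_runs=[[1.0, 1.1, 0.9]],
    pre_failure_runs=[[2.0, 2.2, 2.1]])
print(failure_likely([1.0, 1.9, 2.0, 2.1, 2.2], threshold))  # True: trending upward
```

The payoff described above comes from the lead time: an alert weeks ahead of the predicted failure lets the service team stage parts and labor at ordinary cost instead of reacting to an outage.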


AI/ML for Predictive Supply Chain

We’re already seeing machine learning being used for predictive maintenance on machines (see sidebar). Similarly, supply chains can be viewed as big complex machines,2 with many moving parts … albeit usually a lot more complex than most physical machinery and with uncontrollable (but visible) external factors such as weather, traffic, and port congestion. Before you can do ‘predictive maintenance’ on your supply chain, it has to be instrumented with sensors and data feeds. These could be physical sensors, such as temperature sensors or RFID readers, but more frequently these will be feeds from various systems and data sources3 providing inputs about what is happening in and around the supply chain. As a generalization, the more variety of data being ingested, the better the results achieved by machine learning algorithms.4 With this broad set of data, the machine learning engine can start to learn and discover the patterns that are likely to predict disruptions in the supply chain, shortfalls in production, and signs that demand is going to deviate from the forecast.

JDA Labs in Montreal has been working with one of their customers, a leading tire manufacturer in Europe, to leverage millions of data points to predict disruptions that normal planning might miss. So even though the plan indicates no problems, the machine learning engine, using a much broader data set and its own learnings, may spot potential trouble and alert the organization.   

JDA’s Path to Global (End-to-End) Optimization

One axiom of supply chain optimization is that global optimization yields better results than local optimization. Traditionally, optimization is done within fairly siloed domains. A company may have one engine to optimize their private fleet, another to optimize their for-hire transportation, another to optimize warehouse operations, and another to optimize inventory levels and replenishment. Each of these may reach some level of optimization within their own domain, but not consider the impact on other domains. The transportation optimization may create delivery schedules that are suboptimal (or even not possible) to execute with available warehouse labor and capacity, for example. And that is only within one enterprise. It gets even more complicated when multiple trading partners and service providers (e.g. logistics) are involved, not to mention the ever-changing realities on the ground.

Optimization engines generally have some kind of model of the domain they are optimizing, to incorporate things like lead times or process times, capacities and constraints, and optimization objectives. Supply chains are so complex that building a single monolithic model of an entire end-to-end supply chain is difficult, if not impossible.

Synchronized Optimization: An Incremental Approach to Boiling the Ocean

JDA is taking a different approach, developing domain-specific optimization engines that work collaboratively doing ‘synchronized optimization.’ They pioneered this concept in their Intelligent Fulfillment suite, which provides that kind of synchronized optimization between their warehouse and transportation systems. Within each of those domains (WMS and TMS),5 each engine is doing its own constraint-based optimization, i.e. optimizing a given set of objectives (cost, cycle times, service levels, etc.) within the constraints and availability of the given resources. In its initial plan, the TMS optimization engine may plan truck arrival times that do not work well for the labor force and capacity available in the WMS. Dock door reservation is the perennial ‘no-man’s-land’ between these two systems. So, JDA has the WMS and TMS systems share each of their proposed plans, making back-and-forth adjustments, to reach a joint plan that is optimized within each domain and feasible across both domains. In a sense, the WMS can be viewed as an external constraint applied to the TMS and vice versa.
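A highly simplified sketch of that back-and-forth negotiation (the truck names and dock-door capacity are invented for illustration; the real engines run full constraint-based optimizations on each side, not a one-line repair rule):

```python
from collections import Counter

DOCK_DOORS = 2  # assumed WMS capacity: trucks that can be unloaded per hour

def tms_initial_plan(orders):
    """TMS side: naively schedule each truck at its cheapest arrival hour."""
    return {truck: hour for truck, hour in orders}

def overloaded_trucks(plan):
    """WMS side: list trucks scheduled into an hour with more arrivals than doors."""
    arrivals = Counter(plan.values())
    return [truck for truck, hour in plan.items() if arrivals[hour] > DOCK_DOORS]

def synchronize(plan, max_rounds=10):
    """Back-and-forth adjustment: TMS shifts one conflicting truck an hour
    later each round until the WMS accepts the joint plan."""
    for _ in range(max_rounds):
        conflicts = overloaded_trucks(plan)
        if not conflicts:
            break
        plan[conflicts[-1]] += 1  # TMS concedes: delay the last conflicting truck
    return plan

# Three trucks all want the 8:00 slot; only two dock doors are available.
print(synchronize(tms_initial_plan([("A", 8), ("B", 8), ("C", 8)])))
# {'A': 8, 'B': 8, 'C': 9}
```

The key property this toy preserves is that neither side dictates to the other: each engine optimizes its own domain and treats the other's plan as a constraint, iterating to a jointly feasible result.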

After that, the two systems (TMS and WMS) remain connected during execution via a publish and subscribe6 mechanism. That way, if potentially impactful changes happen (such as a large rush order placed by a key customer, or trucks delivering to the DC running late), the other side can be alerted, potentially triggering a replan.
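A minimal in-process sketch of the publish-and-subscribe pattern (a real deployment would use messaging middleware rather than this toy class, and the event names here are invented):

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe: publishers emit named events without
    knowing who listens; subscribers receive only the events they chose."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._handlers[event_name].append(handler)

    def publish(self, event_name, payload):
        for handler in self._handlers[event_name]:
            handler(payload)

bus = EventBus()
replans = []
# The WMS subscribes only to events that might force it to replan.
bus.subscribe("truck_delayed", lambda p: replans.append(f"replan check: {p}"))
bus.publish("truck_delayed", "inbound load #7 two hours late")
bus.publish("routine_status", "no subscribers, so no noise for the WMS")
print(replans)  # ['replan check: inbound load #7 two hours late']
```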

JDA is now taking that approach and expanding it to synchronized optimization across multiple domains and processes, and eventually across multiple enterprises. This is part of what they are developing within their new Luminate ControlTower (more on Luminate below) and it will include synchronization of optimization across WMS, TMS, Store Operations, and eCommerce fulfillment. It starts with JDA Demand-Fulfillment creating demand plans and constraint-aware fulfillment plans that span the entire operations of the company. If demand is for 500 pallets but the DC only has room for 200, the ControlTower will seek alternate means of fulfilling the demand. Or suppose a food distributor in Florida is moving product to their DCs and from there to customers’ restaurants when a hurricane is forecasted to hit. The ControlTower can orchestrate a response.

Messaging Backbone and Canonical Object Model

This kind of synchronized optimization requires a common messaging backbone between the components as well as a shared business object model.

For the former (messaging backbone), JDA has partnered with MuleSoft, who provides a cloud-based integration Platform-as-a-Service (iPaaS) which includes an Enterprise Service Bus and SOA capabilities, API and microservices tools (such as API Designer, API Manager, and Anypoint Connectors, including over 13,000 public APIs for enterprise, social media, and mobile apps), B2B/EDI, integration flow design, monitoring, analytics, and DevOps tools. Rather than build all this themselves, JDA decided to use MuleSoft so they could focus their time and R&D resources on supply chain applications and accelerate time-to-market. This approach (using a partner for core platform integration) is in contrast to what the large ERP vendors have done; SAP, Oracle, and Infor have all built their own integration backbones.

For the latter (common object model), JDA has put significant effort into building a canonical data model7 for all the objects that need to be shared. This is not a shared database, but rather a set of shared object definitions. Using open APIs, each functional process translates its internal objects into the common shared objects. With this foundation of a loosely coupled architecture, JDA has the flexibility to add new offerings, modify existing offerings, and add new events and processes. The approach makes sense, given JDA’s heritage and diversity of systems.
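The translation idea can be sketched in a few lines (the field names on both sides are hypothetical; the point is that each application maps its internal records into one shared canonical shape rather than reading another application's database directly):

```python
# Hypothetical internal record shapes for two applications, each translated
# into the same shared canonical object via its own adapter function.
CANONICAL_FIELDS = ("order_id", "quantity", "location")

def wms_to_canonical(wms_row):
    """Map a (made-up) WMS-internal record to the shared object model."""
    return {"order_id": wms_row["ord"],
            "quantity": wms_row["qty_ea"],
            "location": wms_row["dc_code"]}

def tms_to_canonical(tms_row):
    """Map a (made-up) TMS-internal record to the same shared object model."""
    return {"order_id": tms_row["shipment_ref"],
            "quantity": tms_row["units"],
            "location": tms_row["origin"]}

a = wms_to_canonical({"ord": "SO-1", "qty_ea": 40, "dc_code": "DC7"})
b = tms_to_canonical({"shipment_ref": "SO-1", "units": 40, "origin": "DC7"})
print(a == b)  # True -- both systems now describe the same order the same way
```

Because each application owns only its own adapter, new offerings can be added without touching every existing integration, which is the loose coupling the paragraph above describes.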

JDA as a Platform

Girish said their autonomous supply chain moonshot is too ambitious to build all by themselves. Hence, JDA is working towards becoming a platform company—i.e. offering a complete set of APIs and development tools and resources to enable an ecosystem of partners to integrate existing applications and build new applications and engines on top of the JDA platform. This is roughly analogous to what Salesforce has done with Force.com. Starting next year JDA’s Focus user conference will be followed immediately by a JDA developer conference.

Building a successful platform (i.e. one that becomes widely adopted by developers and complementary solution providers) is an enormous undertaking. It will take a substantial investment (probably hundreds of millions of dollars over time) to build out a complete set of developer tools and resources, such as deep and rapid/responsive8 technical support, a variety of educational resources, a vibrant community, conferences and events, and so forth. Many other companies have had similar aspirations to attract many developers and solution providers to use their platform. But building a great platform by itself is not enough. You have to convince developers and solution providers to make the major investment required to actually build things on it. They have many other platforms to choose from and are probably only going to pick one or two. That can be a chicken-and-egg issue as application developers often don’t want to use a platform unless it gives them access to a lot of new customers, and customers may not embrace the platform until there is a rich collection of applications on it. Nevertheless, it is the right strategy, and we hope JDA succeeds.9

Figure 1 - JDA Platform Architecture

JDA shared an overview of their platform architecture (Figure 1). In the center is essentially a data lake, containing an end-to-end view of supply chain data from a large variety of sources, including:

  • Enterprise data—Both JDA and non-JDA applications (such as ERP).
  • Trading partner data—From suppliers, channel partners, customers, 3PL, etc.
  • Streaming data—IoT and GPS location data, such as vehicle locations, production status, etc.
  • Big data—Curated datasets such as weather, news, transaction logs, loyalty, etc.

On top of this, the JDA API Gateway provides access to this combined data. [Note: I did not get into a discussion with JDA of how all this diverse data is put into a common schema for consumption. This may be another place where the canonical model is applied.] The API gateway allows access to all that data by JDA and partner apps in the JDA cloud or in a third-party cloud, as well as by applications running on edge devices.

Luminate

At Focus, JDA launched Luminate, their brand name for a new portfolio of nine applications based on new technologies including AI/Machine Learning, IoT/edge platforms, and advanced analytics. Six of the nine applications are added on top of existing JDA applications to provide enhanced functionality. Four of these six add-on applications use the existing single-tenant architecture of JDA’s heritage applications, but they will be hosted and managed by JDA and sold using a subscription licensing model (rather than the traditional perpetual license). The other two add-on Luminate applications are built using a multi-tenant SaaS model (what some of us refer to as ‘true SaaS,’ though there is not universal agreement on that).10 All three of the new standalone Luminate applications are also multi-tenant SaaS. The six Luminate applications that are designed to work with and enhance existing JDA applications are:

Single-tenant SaaS
  • Luminate Factory—Leverage IoT and other factory data to sense and predict deviations from planned production.
  • Luminate Strategic Planning—Integrated business planning leveraging machine learning and non-traditional structured and unstructured data.
  • Luminate Transport—Via partners like FourKites and TransVoyant, provide real-time visibility and precise ETAs, anticipating disruptions. Diagnose causes and prescribe potential resolution.
  • Luminate Warehouse—Dynamic task optimization, updated as circumstances change throughout the day, using AI and machine learning with signals from IoT devices, the cloud, and the WMS.
Multi-tenant SaaS
  • Luminate Demand—Ingests additional external data that potentially provides insights into changes to demand. That data is fed into machine learning-based analytics, which recommend adjustments to the near-term forecast, as needed.
  • Luminate Supply—Predict supply and prescribe fixes for supply issues, using SNEW,11 IoT, and other external data.

The three new standalone applications are:

Multi-tenant SaaS
  • Luminate ControlTower—Real-time end-to-end visibility, collaboration, and optimization (as we covered in the section above on JDA’s Path to Global (End-to-End) Optimization).
  • Luminate StoreOptimizer—Dynamically optimizing labor and inventory within a retail store, across inbound receiving, dynamic merchandising, maintaining shelf hygiene and planogram compliance, outbound and pickup-in-store order fulfillment, checkout, and customer service. JDA incorporates IoT data via Intel’s Responsive Retail Platform (store sensing edge platform/IoT gateway). JDA is also partnering with ReTech Labs, using their video recognition technology for planogram compliance monitoring.
  • Luminate Assortment—JDA took their Retail.me capability, which uses machine learning to do buying-behavior-based persona clustering, productized it, and added additional analytics to understand attractiveness of products to various customer segments, and help prescribe per-cluster assortments.

New Customer Experience Centers—Doubling Down on Co-innovation

For quite a few years now, JDA has emphasized co-innovation with their customers. JDA Labs will not go beyond initial exploration of a new technology without one or more customers to take the journey with them, to ensure that whatever is developed is relevant and useful in a real-world setting. At Focus, they told us about their new Customer Experience Centers (they are calling these “JXCs”): one in London that opened on April 18th of this year, and a newer 13,200-square-foot JXC at their global headquarters in Scottsdale, Arizona, that opened on May 15th. The JXCs are designed to be flexible spaces, with power and data anywhere on the floor and plenty of wall-mounted displays. The office is staffed with developers who are part of JDA Labs. These are intended to be more than just demo centers. They are part lab and development space, and a place where JDA can not only show customers new innovations, but also have a dialog about what they are showing and get feedback and input from customers.

Blockchain

These days, all technology companies are required to have a blockchain strategy, because right now blockchain is the Next Big Thing du jour. Like most other enterprise solution providers, JDA is in the process of figuring out where this technology fits and what the appropriate uses for it are. They are looking at the various frameworks and players, building prototypes for simple use cases, and evaluating which use cases might have legs. The folks at JDA Labs that I talked to said the use cases they are exploring seem to fit into four categories: collaboration, traceability, disintermediation, and digital supply chain. Of those four, they think traceability provides the most concrete use cases for their customers, providing things like product provenance, proof of organic and fair trade, cold chain integrity, and supply chain auditability. Time will tell which direction JDA (and a lot of other solution providers) ultimately settle on and the actual role blockchain (or its cousins) will play in the overall scheme of things.

Encouraging Signs

JDA and the companies it has acquired (i2, Manugistics, RedPrairie, and many others) have certainly had their share of ups and downs. As they say, it’s been a long and winding road. With their recent growth, major investments, and energized leadership, things are looking up right now. It feels like it’s a good time to be at JDA and to be one of their customers.

_________________________________________________

1 If they maintain their current growth rate, then $500M spent on R&D over the next three years would equate to about 13%-14% of revenue, a healthy rate of investment for a company of JDA’s size. That is slightly more than the percentage of revenue that Microsoft spends on R&D and nearly the percentage that Google spends. -- Return to article text above

2 Due to this complexity, a better analogy might be a complex ecosystem of interacting entities, like a forest or coral reef. -- Return to article text above

3 There are an enormous variety of data sources relevant to achieving visibility in a supply chain, including your own enterprise’s systems, trading partners’ systems, and numerous third-party data sources such as weather, AIS data tracking the location of ships, and social media (which may provide early warning on disruptive events, such as a riot, a closed-down highway, a big fire, or emerging conflicts). -- Return to article text above

4 Machine learning thrives on data, so generally the more sources and variety, the better the predictions, provided the data meets certain minimum quality thresholds. It is not always clear ahead of time which signals/data will add value, making the prediction more accurate. System designers will often take an educated guess to create a broad preliminary set, but let the machine learning algorithm tell them which of those data sources are useful, after it has gone through many iterations of training and learning. -- Return to article text above

5 WMS = Warehouse Management System, TMS = Transportation Management System -- Return to article text above

6 Publish and subscribe is when one system (the publisher) generates events, typically at unpredictable intervals, and the other system (the subscriber) is notified about each of those events. Publish and subscribe middleware allows this to happen efficiently between many different systems wishing to generate and receive event notifications. Publisher systems don’t have to know or care who is receiving the event. Subscribers can subscribe just to the specific events that they need to know about, so they are not bombarded with irrelevant (for them) event notifications. -- Return to article text above

7 A canonical data model is a common shared model that each application translates into and out of. If interested in more on how a canonical model works and is actually used, see Canonical Model, Canonical Schema, and Event Driven SOA. -- Return to article text above

8 Developers need a different kind and level of support. They will not put up with being on hold, then talking to a junior person who simply goes through their script of questions, and then being put through another three levels of escalation before finally talking to someone who actually might have the knowledge required to help solve the thorny technical problem they are wrestling with. Especially if, while on hold, rather than some good hold music, like Miles or Coltrane or Bach, they are forced to listen to a loop of sales pitches by the solution provider on what other products they might be interested in! -- Return to article text above

9 If JDA does succeed in attracting a critical mass of developers, partners, and customers, it will be interesting to see what happens with their MuleSoft partnership, since MuleSoft has been acquired by Salesforce.com. That may depend in part on whether or not Salesforce starts to view JDA as a competitor. Today Salesforce and JDA are almost entirely complementary, not competitors, and that might still be the case in the platform space, as they may be attracting different communities of developers. -- Return to article text above

10 The debate on which is better, single-tenant vs. a multi-tenant architecture, continues even today. There are clearly pros and cons for each, both for customers and for the solution provider. In practice, multi-tenancy is prevailing. For years, almost every new startup has opted to go through the considerable extra effort required to build a multi-tenant application. Furthermore, practically every major enterprise solution provider has either already converted to a multi-tenant architecture for all new development or is headed in that direction. -- Return to article text above

11 SNEW = Social, News, Events, and Weather -- Return to article text above

