Reading Time: 7 minutes KNIME can be used to develop customised components for Supply Chain Management, from inventory planning to forecasting solutions.
Reading Time: 5 minutes Established financial institutions need to target the right people at the right time. But truly seizing the opportunity in this space will require hyper-personalization. Most banking customers don’t think much about the industry unless they’ve reached a crossroads where they need a particular service. That means few people are actively looking to switch banks or are particularly susceptible to traditional marketing tactics. If banking customers …
Reading Time: 4 minutes When the COVID-19 outbreak became a global pandemic, financial-markets volatility hit its highest level in more than a decade, amid pervasive uncertainty over the long-term economic impact. Calm has returned to markets in recent months, but volatility continues to trend above its long-term average. Amid persistent uncertainty, financial institutions are seeking to develop more advanced quantitative capabilities to support faster and more accurate decision making.
Reading Time: 4 minutes AI has traditionally been deployed in the cloud. AI algorithms crunch massive amounts of data and consume massive computing resources. But AI doesn’t live only in the cloud. In many situations, AI-based data crunching and decisions need to happen locally, on devices close to the edge of the network. At the Edge: AI at the edge allows mission-critical and time-sensitive decisions to …
Reading Time: 3 minutes In our previous post, we talked about the different types of agents that can be built for business. Any type of agent (model-based, goal-based, utility-based, etc.) can be built as a learning agent (or not). Learning allows the agent to go beyond the knowledge of its operating environment that it started with. Components of a Learning Agent: the learning agent can be divided …
Reading Time: 5 minutes In the previous post, we discussed the environments in which agents operate and the characteristics of those environments. In this post, let us talk about the types of agents and the data-set challenges they face. All agents have the same skeletal structure: they receive percepts as inputs through sensors, and their actions are performed through actuators. Now the agent can …
Reading Time: 4 minutes In this blog, we will try to understand the concept of persistence in Apache Spark in layman's terms, with scenario-based examples. Note: The scenarios are only meant to aid your understanding. Spark Architecture Note: Cache memory can be shared between executors. What does it mean to persist/cache an RDD? Spark RDD persistence is an optimization technique which saves the result of RDD evaluation …
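The payoff of persistence is that a cached RDD's lineage is evaluated once and then reused by later actions instead of being recomputed. Below is a plain-Python toy sketch of that idea (the `ToyRDD` class is hypothetical and is not the Spark API; in real Spark you would call `rdd.cache()` or `rdd.persist()`):

```python
# Toy illustration of RDD persistence (NOT the Spark API).
# Without persist(), every "action" re-runs the whole lineage of
# transformations; with persist(), the evaluated result is stored and reused.

class ToyRDD:
    def __init__(self, data=None, transform=None, parent=None):
        self.data = data
        self.transform = transform
        self.parent = parent
        self._cached = None        # filled in only after persist() + evaluation
        self.persisted = False
        self.eval_count = 0        # how many times this node was recomputed

    def map(self, fn):
        # Lazy transformation: just records the lineage, computes nothing.
        return ToyRDD(transform=fn, parent=self)

    def persist(self):
        self.persisted = True
        return self

    def collect(self):
        # An "action": triggers evaluation of the lineage.
        if self._cached is not None:
            return self._cached    # served from cache, no recomputation
        self.eval_count += 1
        if self.parent is None:
            result = list(self.data)
        else:
            result = [self.transform(x) for x in self.parent.collect()]
        if self.persisted:
            self._cached = result
        return result

base = ToyRDD(data=range(5))
squared = base.map(lambda x: x * x).persist()
squared.collect()   # first action: lineage is evaluated and cached
squared.collect()   # second action: answered from the cache
```

After the two actions, `squared.eval_count` is still 1; without the `persist()` call it would be 2, because each `collect()` would re-run the map over the parent.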
Reading Time: 4 minutes There is a lot of interest in Machine Learning and AI. Of course, a lot of it is still at level 1 of AI. This is when we are thinking about machines acting like humans. Everyone wants to jump on the AI bandwagon. It is an amazing field, and many organizations do not want to be left behind. That said, something which is ignored most of the time is the fuel: the data!
Reading Time: 4 minutes In the previous blog, we talked about Flink’s windows operator, the heart of processing infinite streams. Generally in Flink, after specifying whether the stream is keyed or non-keyed, the next step is to define a window assigner. The window assigner defines how elements are assigned to windows. Flink provides some useful predefined window assigners like tumbling windows, sliding windows, session windows, count windows, and …
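The assigner's job can be sketched in a few lines of plain Python (illustrative only, not the Flink API): a tumbling assigner maps each timestamp to exactly one fixed-size, non-overlapping window, while a sliding assigner can place one element into several overlapping windows.

```python
# Toy window assigners (NOT the Flink API), windows are [start, start + size).

def tumbling_window(timestamp, size):
    # Each element falls into exactly one non-overlapping window.
    start = (timestamp // size) * size
    return (start, start + size)

def sliding_windows(timestamp, size, slide):
    # One element may belong to several overlapping windows,
    # one per slide step that still covers its timestamp.
    last_start = (timestamp // slide) * slide
    starts = range(last_start, timestamp - size, -slide)
    return [(s, s + size) for s in starts if s >= 0]

tumbling_window(12, size=10)                 # window [10, 20)
sliding_windows(12, size=10, slide=5)        # windows [10, 20) and [5, 15)
```

A session assigner would differ in that window boundaries depend on gaps between elements rather than on fixed timestamps, which is why it cannot be written as a pure function of one timestamp.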
Reading Time: 3 minutes Welcome back folks to this blog series on Spark Structured Streaming. This blog is the continuation of the earlier blog “Understanding Stateful Streaming“. And this blog pertains to Handling Late Arriving Data in Spark Structured Streaming. So let’s get started. Handling Late Data: with window aggregates (discussed in the previous blog), Spark automatically takes care of late data. Every aggregate window is like a bucket …
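The bucket intuition can be sketched in plain Python (a toy model, not the Structured Streaming API; the class name and `allowed_lateness` parameter are hypothetical): a late event still lands in its window's bucket as long as the watermark has not yet finalized that bucket.

```python
# Toy model of late-data handling with a watermark (NOT the Spark API).
from collections import defaultdict

class LateDataBuckets:
    def __init__(self, window_size, allowed_lateness):
        self.window_size = window_size
        self.allowed_lateness = allowed_lateness
        self.buckets = defaultdict(int)   # window start -> event count
        self.max_event_time = 0

    def add(self, event_time):
        self.max_event_time = max(self.max_event_time, event_time)
        watermark = self.max_event_time - self.allowed_lateness
        start = (event_time // self.window_size) * self.window_size
        if start + self.window_size <= watermark:
            return False                  # too late: bucket already finalized
        self.buckets[start] += 1          # late but in time: lands in its bucket
        return True

agg = LateDataBuckets(window_size=10, allowed_lateness=5)
agg.add(12)   # goes into window [10, 20)
agg.add(8)    # arrives late, but watermark is 12 - 5 = 7, so [0, 10) is open
agg.add(25)   # watermark advances to 20; window [0, 10) is now finalized
```

After the watermark passes a window's end, further events for that window are dropped: a subsequent `agg.add(3)` returns `False` and leaves the buckets unchanged.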
Reading Time: 3 minutes Spark provides us a high-level API, Dataset, which makes it easy to get type safety and to perform manipulations safely in both distributed and local environments without code changes. Also, Spark Structured Streaming, a high-level API for stream processing, allows us to stream a particular Dataset, which is nothing but a type-safe structured stream. In this blog, we will see how we can create …
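The "type-safe stream" idea can be illustrated in plain Python (a toy sketch, not the Dataset API; the `Click` record and `typed_stream` helper are hypothetical): parse each element into a typed record once, at the boundary, so downstream code works with fields rather than untyped row lookups.

```python
# Toy illustration of a typed stream (NOT the Spark Dataset API).
from dataclasses import dataclass
from typing import Dict, Iterator

@dataclass
class Click:
    user: str
    url: str

def typed_stream(raw: Iterator[Dict[str, str]]) -> Iterator[Click]:
    # Parsing happens once at the boundary; downstream code gets Click
    # objects and loud attribute/key errors instead of silent bad rows.
    for row in raw:
        yield Click(user=row["user"], url=row["url"])

clicks = list(typed_stream(iter([
    {"user": "a", "url": "/home"},
    {"user": "b", "url": "/cart"},
])))
```

In Spark itself the same boundary is the `as[T]` conversion from a `DataFrame` to a `Dataset[T]`, after which transformations are checked against the element type at compile time.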
Reading Time: 4 minutes Welcome back folks to this blog series on Spark Structured Streaming. This blog is the continuation of the earlier blog “Internals of Structured Streaming“. And this blog pertains to Stateful Streaming in Spark Structured Streaming. So let’s get started. Let’s start with a very basic understanding of what Stateful Stream Processing is. But to understand that, let’s first understand what Stateless Stream Processing is. In …
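The stateless/stateful distinction can be shown in a few lines of plain Python (illustrative only, not the Structured Streaming API): a stateless operator handles each event on its own, while a stateful operator's output depends on state accumulated across events.

```python
# Toy contrast of stateless vs stateful stream processing (NOT the Spark API).

def stateless_double(events):
    # Stateless: each event is processed independently;
    # no memory is kept between events.
    return [e * 2 for e in events]

class StatefulCounter:
    # Stateful: the output for each event depends on state
    # accumulated from all earlier events.
    def __init__(self):
        self.count = 0

    def process(self, event):
        self.count += 1
        return (event, self.count)

counter = StatefulCounter()
counter.process("a")   # ("a", 1)
counter.process("b")   # ("b", 2) - depends on having seen "a" before
```

Window aggregates, running counts, and deduplication are all stateful in this sense, which is why Structured Streaming must manage a state store for them across micro-batches.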