How Manage Accelerated Data Freshness by 10x

Success in the mobile advertising industry is achieved by delivering contextual ads in the moment. The faster and more personalized a display ad, the better; any delay in ad delivery means lost bids, revenue, and ultimately customers. Manage, a technology company specializing in programmatic mobile marketing and advertising, helps drive mobile application adoption for companies like Uber, Wish, and Amazon. In a single day, Manage generates more than a terabyte of data and processes more than 30 billion bid requests. Manage analyzes this data to determine which impressions to buy on behalf of advertisers, and uses machine learning models to predict the probability of clicks, app installs, and purchases.

Managing Data at Scale

At the start, Manage used MySQL to power its underlying statistics pipeline, but quickly ran into scaling issues as data volume grew. Manage then turned to Hadoop, coupled with Apache Hive and Kafka, for data management, analysis, and real-time data feeds. Even with this optimized data architecture, however, Manage found that Hive was slow and introduced hours of delay into data pipelines. To meet customer expectations, Manage needed a solution that could deliver fresh data for reporting while concurrently allowing its analytics team to run ad hoc queries. Kai Sung, Manage CTO and co-founder, began the search for a faster database platform and found SingleStore. The Manage team quickly started prototyping on SingleStore and was in production within a few months.

Streaming Log Data from Apache Kafka

Manage uses SingleStore Streamliner, an Apache Spark solution, to stream log data from Apache Kafka into the SingleStore columnstore for further processing. As new data arrives, the pipeline de-duplicates it and aggregates it into various summary tables within SingleStore. This data is then made available to an external reporting dashboard and reporting API. With this architecture, Manage has a highly scalable, real-time data pipeline that ingests and summarizes data as fast as it is produced.

10x Faster Data

After implementing SingleStore, Manage reduced the delay in data freshness from two hours down to 10 to 15 minutes. With SingleStore, the Manage team can now run analytics much faster and react to marketplace changes in the moment.
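For intuition, the de-duplicate-and-aggregate step described above can be expressed as a summary-table upsert in MySQL-dialect SQL (which SingleStore speaks). This is only a minimal sketch under assumed table and column names, not Manage's actual schema:

```sql
-- Hypothetical raw table; the PRIMARY KEY on event_id lets the pipeline
-- reject duplicate deliveries from Kafka (e.g., via INSERT IGNORE).
CREATE TABLE ad_events (
    event_id    VARCHAR(64) NOT NULL PRIMARY KEY,
    campaign_id BIGINT NOT NULL,
    event_time  DATETIME NOT NULL,
    clicks      INT NOT NULL DEFAULT 0,
    installs    INT NOT NULL DEFAULT 0
);

-- Hypothetical per-minute summary table behind the reporting dashboard.
CREATE TABLE campaign_minutes (
    campaign_id  BIGINT NOT NULL,
    minute_start DATETIME NOT NULL,
    clicks       BIGINT NOT NULL,
    installs     BIGINT NOT NULL,
    PRIMARY KEY (campaign_id, minute_start)
);

-- Roll newly arrived events into the summary, accumulating on key collisions.
-- (A real pipeline would track a high-water mark rather than a NOW()-based
-- window, so re-runs do not double-count.)
INSERT INTO campaign_minutes (campaign_id, minute_start, clicks, installs)
SELECT campaign_id,
       DATE_FORMAT(event_time, '%Y-%m-%d %H:%i:00') AS minute_start,
       SUM(clicks), SUM(installs)
FROM ad_events
WHERE event_time >= NOW() - INTERVAL 1 MINUTE
GROUP BY campaign_id, minute_start
ON DUPLICATE KEY UPDATE
    clicks   = clicks + VALUES(clicks),
    installs = installs + VALUES(installs);
```

Because the dashboard reads only the small summary tables, query latency stays flat no matter how large the raw event stream grows.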

Jumping the Database S-Curve

Adaptation and Reinvention

Long-term success hinges on adaptation and reinvention, especially in a dynamic world where nothing lasts forever. In business especially, we routinely see the rise and fall of products and companies. The long game mandates change, and the database ecosystem is no different. Today, the megatrends of social, mobile, cloud, big data, analytics, IoT, and machine learning place us at a generational intersection. Data drives our digital world, and the systems that shepherd it underpin much of our technology infrastructure. But data systems are morphing rapidly, and companies reliant on data infrastructure must keep pace. In other words, winning companies will need to jump the database S-curve.

The S-Curve Concept

In 2011, Paul Nunes and Tim Breene of the Accenture Institute for High Performance published Jumping the S-Curve: How to Beat the Growth Cycle, Get on Top, and Stay There. In the world of innovation, an S-curve describes the common evolution of a successful new technology or product. At first, early adopters provide the momentum behind uptake. A steep ascent follows, as the masses swiftly catch up. Finally, the curve levels off sharply as adoption approaches saturation. The book details a common dilemma: too many businesses manage only a single S-curve of revenue growth, in which the business starts out slowly, grows rapidly until it approaches market saturation, and then levels off. The authors contrast this with stand-out, high-performance businesses that manage growth across multiple S-curves. These companies continually find ways to invent new products and services that drive long-term revenue and efficiency.

451 Research Webcast: In-Memory Computing Market Predictions 2017

Adoption of in-memory technology solutions is happening faster than ever, driven by a three-pronged demand. First, a greater number of users, analysts, and businesses need access to data. Second, the number of transactions is increasing globally, so companies need faster ingest and analytics engines. Finally, performance inconsistencies are the nail in the coffin for companies competing in the on-demand economy; these enterprises need the responsiveness that in-memory technology provides. In addition to these rising demands for real-time data access and analytics, several other factors contribute to in-memory technology adoption, as outlined in the following graphic:

SQL: The Technology That Never Left Is Back!

The Prelude

The history of SQL, or Structured Query Language, dates back to 1970, when E.F. Codd, then of IBM Research, published a seminal paper titled “A Relational Model of Data for Large Shared Data Banks.” Since then, SQL has remained the lingua franca of data processing, helping build the relational database market into a $36 billion behemoth.

The Rise and Fall of NoSQL

Starting in 2010, many companies developing datastores tossed SQL out with the bathwater after seeing the challenges of scaling traditional relational databases. A new category of datastores emerged, claiming a new level of scalability and performance. But without SQL, they found themselves at a loss for enabling easy analytics. Before long, it was clear that NoSQL carried many hidden costs.

The Comeback That Never Left

More recently, many point to a SQL comeback, although the irony is that it never left. In a piece last week on 9 enterprise tech trends for 2017 and beyond, InfoWorld Editor in Chief Eric Knorr notes on trend number 3, “The incredible SQL comeback”:

“For a few years it seemed like all we did was talk about NoSQL databases like MongoDB or Cassandra. The flexible data modeling and scale-out advantages of these sizzling new solutions were stunning. But guess what? SQL has learned to scale out, too – that is, with products such as ClustrixDB, DeepSQL, SingleStore, and VoltDB, you can simply add commodity nodes rather than bulking up a database server. Plus, such cloud database-as-a-service offerings as Amazon Aurora and Google Cloud SQL make the scale-out problem moot. At the same time, NoSQL databases are bending over backward to offer SQL interoperability. The fact is, if you have a lot of data then you want to be able to analyze it, and the popular analytics tools (not to mention their users) still demand SQL. NoSQL in its crazy varieties still offers tremendous potential, but SQL shows no sign of fading. Everyone predicts some grand unification of SQL and NoSQL. No one knows what practical form that will take.”

Taking the ‘no’ out of NoSQL

In the article “Who took the ‘no’ out of NoSQL?”, Matt Asay writes: “In the wake of the NoSQL boom, we’re seeing a great database convergence between old and new. Everybody wants to speak SQL because that’s where the primary body of skills reside, given decades of enterprise build-up around SQL queries.” The article, which interviews a host of NoSQL specialists, reminds us of the false conventional wisdom that SQL doesn’t scale. Quoting a former MongoDB executive, Asay notes: “But the biggest benefit of NoSQL, and the one that RDBMSes have failed to master, is its distributed architecture.” The reality is that legacy vendors have had trouble applying scale to their relational databases. However, new companies using modern techniques have shown it is very possible to build scalable SQL systems with distributed architectures.

SQL Reigns Supreme in Amazon Web Services

There is no better bellwether for technology directions these days than Amazon Web Services, and the statistics shared by AWS tell the story. In 2015, Andy Jassy, CEO of Amazon Web Services, noted that the fastest-growing service in AWS was the data warehouse offering Redshift, based on SQL. In 2016, he noted that the fastest-growing service was the database offering Aurora, also based on SQL. And one of the newest services, AWS Athena, delivers SQL on S3.
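To make “SQL on S3” concrete, here is a hedged sketch in the Hive-style DDL that Athena accepts; the bucket, path, and columns are hypothetical:

```sql
-- Hypothetical: expose raw CSV logs sitting in S3 as a queryable table.
-- Athena reads the files in place; there is no cluster to load or manage.
CREATE EXTERNAL TABLE access_logs (
    request_time STRING,
    user_id      STRING,
    url          STRING,
    status_code  INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://example-bucket/access-logs/';

-- Ordinary SQL over the object store, no MapReduce expertise required.
SELECT status_code, COUNT(*) AS hits
FROM access_logs
GROUP BY status_code
ORDER BY hits DESC;
```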
Athena is conceptually similar to the wave of ‘SQL as a layer’ solutions developed by Hadoop purveyors so customers could have easy access to unstructured data in HDFS; lo and behold, there were simply not enough MapReduce experts to make sense of the data. AWS has recognized a similar analytics conundrum with S3, whose growth has been so strong that object stores appear to be becoming the new data lakes. And what do you do when you have lots of data to examine and want to do so easily? You add SQL.

SQL Not Fading Away

Nick Heudecker, Research Director in the Data and Analytics group at Gartner, put his finger on it recently:

“Each week brings more SQL into the NoSQL market subsegment. The NoSQL term is less and less useful as a categorization.” — Nick Heudecker (@nheudecker) November 8, 2016

Without a doubt, the data management industry will continue to debate the wide array of approaches possible with today’s tools. But if we’ve learned one thing over the last five years, it is that SQL never left, and it remains as entrenched and important as ever.

SingleStore, Tableau, and the Democratization of Data

“We love fast databases. It makes the experience of interacting with your database that much more enjoyable.” – Tableau

Today’s business decisions are about seconds, not minutes. To keep up, businesses have moved to evidence-backed decision making and widespread data access. Modern business intelligence tools abound, making it easier for the average analyst to create compelling visualizations. In this post, I’ll address how this new mode of thinking about data, the Democratization of Data, comes with two challenges: making data easily available, and making it actionable in real time.

Making Data Available

Companies are migrating to a new model of data distribution: shared access to a centralized database holding both historical and real-time data. This is a far cry from the traditional approach of many small database instances with stale data, isolated silos, and limited user access. Now raw data is available to everyone. Employees are empowered to dive into the data, discover new opportunities, and close efficiency gaps in a way that was never possible before. The need for data now, coupled with scalability, has attracted many developers to in-memory, clustered databases.

Making Data Actionable in Real Time

Innovations in data visualization have produced powerful, usable tools that give companies the opportunity to be data-driven. One tool we see embedded across industries is Tableau. With its mature ecosystem and rich feature set, the business intelligence platform makes it easy for individuals to create compelling, interactive data visualizations. It is attractive at every level of the business because it does not require a degree in visual design or information systems. Any user can create meaningful, actionable dashboards, providing views of the business from thirty thousand feet as well as at ground level. But even with a Tableau license in hand, users still face issues: the dashboards are slow, or the data is stale. The problem often lies in the database layer. Data must be up to date to be relevant to today’s fast-moving business operations. Common issues include:

Five Talks for Building Faster Dashboards at Tableau Conference

Tableau Conference 2016 kicks off in Austin, Texas on November 7-11, offering data engineers and business intelligence pros a place to gather and learn how to use data to tell a story through analytics and visualizations. At TC16, SingleStore will showcase its native, high-performance Tableau connectivity through the Tableau SingleStore connector. Additionally, SingleStore will present a new showcase application, SingleStore Springs: Real-Time Resort Demographic Analysis, which visualizes live customer behavior by demographic across resort properties in a Tableau dashboard. Attendees can visit the SingleStore booth to learn how to natively connect Tableau to SingleStore for enhanced dashboard performance and scalability.

Real-Time Roadshow Rolls into Phoenix, Arizona

We’re packing our bags and heading to the Southwest to kick off the first-ever SingleStore Real-Time Roadshow!

Healthcare, education, aerospace, finance, technology, and other industries play a vital role in Phoenix, home to leading corporations like Honeywell, JP Morgan, AIG, American Express, Avnet, and UnitedHealth Group. Businesses in these industries face the constant challenge of keeping up with the high expectations of users and consumers who demand personalized, immediate services. To meet these challenges and elevate their businesses above the competition, industry leaders and data engineers in the Phoenix area are embracing real-time applications.

We’re bringing the Real-Time Roadshow to the capital of Arizona to connect directly with this vibrant community of businesses and developers eager to learn more about real-time initiatives. Through a series of in-depth technical sessions and demonstrations, this event gives data professionals and data leaders an opportunity to investigate the power of real-time solutions. Here’s what you will learn:

- Forces driving the need for real-time workloads
- How to process and translate millions of data points into actionable insights
- How to drive new revenue and cut operating costs with real-time data
- How predictive analytics gives companies a competitive advantage in anticipating outcomes
- Top data architectures for real-time analytics and streaming applications
- Use cases and examples from companies building real-time applications

Speaking Sessions

Driving the On-Demand Economy with Predictive Analytics
SingleStore CTO and co-founder Nikita Shamgunov demonstrates how a real-time trinity of technologies – Apache Kafka, Apache Spark, and SingleStore – enables companies to power their businesses with predictive analytics and real-time applications.

Real-Time Analytics with SingleStore and Apache Spark
SingleStore Engineer Neil Dahlke dives deep into how Pinterest measures real-time user engagement in this technical demonstration, which leverages Spark to enrich streaming data with geolocation.

A Flying Pig and the Zen of Database Proof of Concepts

A customer asks potential vendors: I need a pig that can fly. Whoever can get me one wins the deal.

Vendor 1, the Engineer, says, “There is no such thing as a flying pig. Do not waste our time. We are not interested.”

Vendor 2, the Geneticist, says, “I am going to create a new species of pig – one with wings.” He goes to work on a flying pig. He never comes back.

Vendor 3, the Practical One, says, “Flying pig indeed! Yes, we can get you one.” Vendor 3 takes a drone, makes it look like a pig on the outside, and flies it.

Vendor 3’s approach is a classic example of redefining the problem, or finding a suitable workaround. Executing a database proof of concept has similar themes:

- There are no perfect databases, and no perfect workloads either.
- Real-world scenarios are for the most part models that have been built over many years. You would be hard-pressed to find a good data model and well-written queries.
- Ideally, you tweak the database to suit the workload. The alternative is attractive, but time-consuming, and requires customer buy-in.
- Time is always short. Innovating workarounds to known limitations, and focusing on strengths, is important.
- Solutions that are realistic, simple, and effective work well for the majority. Do not let perfection become the enemy of the good.

Winning a database proof of concept requires the following steps:

1. Understand the data and the workload. By peeking into the contents, you gain insight into the actual business use case. By knowing the relations and the basic thought process that went into building the model, you are in a better position to make changes as needed. This is the hardest step and takes the most effort and time. However, the payoff is well worth the hard work: winning the customer’s confidence.
2. Load the data. This is by far the easiest part. As you load data, you handle prerequisites such as gathering statistics, choosing the right partitioning strategy, and indexing. (A sketch of this step appears at the end of this post.)
3. Execute the workload. It gets more interesting here. At this point, you know whether your database engine can deliver out of the box or needs tweaks. If you followed step 1, you have the in-depth knowledge to solve problems or make alterations.

Unfortunately, most of us who have been in the industry long enough, including myself, have biases and preconceived notions. These biases can hinder your ability to find creative solutions. To quote Bruce Lee: “Empty your cup so that it may be filled; become devoid to gain totality.” An open mind makes us more willing to consider alternatives. By locking ourselves up, we limit our capabilities. Our preconceived limitations define us and box us in. Once you have executed the workload and identified how to meet the customer’s requirements, the next step is to package up the results and present them.

Converting a Successful Proof of Concept to a Deal Is the Next Challenge

I have done enough proofs of concept to realize that the winner is rarely the best-engineered solution. Economics trump everything, which means cost-effective solutions that meet most of the customer’s requirements tend to win the deal. In short, the ability to innovate, adapt, and be flexible wins the deal.

On a closing note: being a Star Trek fan, every time I run into a pickle with a proof of concept, I think back to the Kobayashi Maru training exercise. From Wikipedia (edited): “The Kobayashi Maru is a training exercise in the Star Trek universe designed to test the character of Starfleet Academy cadets in a no-win scenario. The test’s name is used to describe a no-win scenario, a test of one’s character, or a solution that involves redefining the problem.”
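As referenced in step 2 above, here is a purely illustrative sketch of the load step in MySQL-dialect SQL for a distributed database such as SingleStore, where the shard key and indexes are chosen before loading. All names and paths are hypothetical:

```sql
-- Pick the distribution (shard) key and indexes up front; they drive
-- both load throughput and query performance in a distributed engine.
CREATE TABLE events (
    event_id   BIGINT NOT NULL,
    account_id BIGINT NOT NULL,
    event_time DATETIME NOT NULL,
    payload    JSON,
    SHARD KEY (account_id),   -- co-locate rows that are queried together
    KEY (event_time)          -- supports time-windowed range scans
);

-- Bulk-load the customer's extract.
LOAD DATA INFILE '/data/events.csv'
INTO TABLE events
FIELDS TERMINATED BY ',';

-- Refresh optimizer statistics once the bulk load completes.
ANALYZE TABLE events;
```

The point is not the exact DDL but the ordering: understanding the workload (step 1) tells you what to shard and index on before the first row is loaded.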

SingleStore and Oracle: Better Together

Oracle OpenWorld 2016 kicks off on September 18th in San Francisco with ten tracks, including a Data Center track highlighting innovation in databases, including SingleStore and Oracle.

We built SingleStore to be a flexible ecosystem technology, as exemplified by several features. First, we offer flexible deployments: hybrid cloud, on-premises, VMs, or containers. Second, our connector tools, such as the SingleStore Spark Connector and Streamliner, are open source and let you build real-time pipelines and import from popular datastores like HDFS, S3, and MySQL. Finally, SingleStore is a memory-first engine, designed for concurrent data ingest and analytics. These ingredients make SingleStore a perfect real-time addition to any stack. Several of our customers combine SingleStore with traditional systems, in particular Oracle databases. SingleStore and Oracle can be deployed side by side to enhance scalability, distributed processing, and real-time analytics.

Three Ways SingleStore Complements Oracle

SingleStore as the Real-Time Analytics Engine

Data can be copied from Oracle to SingleStore using a change data capture tool, and analytical queries can be performed in real time.
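As a rough illustration of that pattern (the table and the change feed are hypothetical, and the choice of CDC tool is left open): because SingleStore speaks the MySQL dialect, changes captured from Oracle can be applied as ordinary keyed upserts and deletes, while analysts query the replica concurrently.

```sql
-- Hypothetical replica table in SingleStore mirroring an Oracle source.
CREATE TABLE orders_rt (
    order_id    BIGINT NOT NULL PRIMARY KEY,
    customer_id BIGINT NOT NULL,
    amount      DECIMAL(12, 2) NOT NULL,
    updated_at  DATETIME NOT NULL
);

-- A CDC tool emits each committed change; inserts and updates can
-- both be applied as upserts keyed on the primary key.
REPLACE INTO orders_rt (order_id, customer_id, amount, updated_at)
VALUES (1001, 42, 99.50, NOW());

-- Deletes are applied by key.
DELETE FROM orders_rt WHERE order_id = 1002;

-- Meanwhile, analysts query the replica in real time;
-- Oracle remains the system of record.
SELECT customer_id, SUM(amount) AS revenue_last_hour
FROM orders_rt
WHERE updated_at >= NOW() - INTERVAL 1 HOUR
GROUP BY customer_id;
```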

Why Role-Based Access Control Is Essential to Database Security

As repositories of highly sensitive, confidential, and valuable business data, databases are the crown jewels of every organization. Successful businesses not only supply accurate and timely data; they must protect it as well. Security provides a critical competitive edge for any high-functioning database, so database providers must prioritize protecting data in order to win loyal customers who can trust the systems in place to guard valuable information.

In our latest enterprise release, SingleStoreDB Self-Managed 5.1, we added Role-Based Access Control (RBAC) as a powerful tool for protecting customer data. With this new security feature, SingleStore customers can scale to tens of thousands of users and roles without compromising performance. RBAC provides high scalability and fine-grained control over user access to data, well suited to intensive workloads like those generated by the Internet of Things. SingleStoreDB Self-Managed 5.1 brings enterprise-level security to real-time analytics at scale, proving that customers should never have to sacrifice security for speed.

Findings in the 2016 Verizon Data Breach Investigations Report underscore the case for RBAC as a robust shield against unauthorized access to secure data. Of the ten incident classification patterns cited in the report, privilege misuse ranks among the most common sources of data breaches, along with web app attacks, denial-of-service, crimeware, and cyber-espionage.

[Figure: “Incident Classification Patterns”]
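A minimal sketch of how such role-based grants look in SQL follows. All names here are hypothetical, and the exact role/group DDL should be checked against the SingleStoreDB 5.1 documentation; the point is the shape of the model, in which privileges attach to roles, roles attach to groups, and users join groups:

```sql
-- Define the privilege set once, on a role.
CREATE ROLE 'read_metrics';
GRANT SELECT ON analytics.* TO ROLE 'read_metrics';

-- Attach the role to a group that represents a class of users.
CREATE GROUP 'dashboard_users';
GRANT ROLE 'read_metrics' TO GROUP 'dashboard_users';

-- Adding a new analyst is one membership grant,
-- not a pile of per-table privileges.
CREATE USER 'alice'@'%' IDENTIFIED BY 'password';
GRANT GROUP 'dashboard_users' TO USER 'alice';
```

This indirection is what lets access for tens of thousands of users be managed through a handful of roles and groups without per-user bookkeeping.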

The Changing Face of the Modern CIO

The role of Chief Information Officer (CIO) first broke onto the scene in 1981. Today, thirty-five years later, the responsibilities of the CIO have changed radically. The original CIO was a senior executive responsible for the information technology and computer systems supporting enterprise goals. But as business needs rapidly change, so too does the role of the modern CIO. Today, CIOs must adapt or be left behind with legacy systems. The modern CIO is expected to take on multiple responsibilities, including:

- Managing platforms and systems such as data governance, mobility, and cloud infrastructure
- Investing in security, improved database speed and access, big data analytics, and integration
- Identifying trends, threats, and partners that align with business goals
- Making sure the company’s data is clean, accessible, easy to understand, and secure
- Hiring and developing talent

CIOs now face many challenges, since IT plays an even more important role in core business strategy than in previous years. Managing systems through data analysis and cloud infrastructure allows CIOs to operate with an agile development mentality and be more fluid in identifying and implementing new business operations. And while much of the CIO’s responsibility has shifted from managing server farms in a closet to managing a cloud, hardware is just as important today with the emergence of the Internet of Things (IoT). CIOs can now use IoT to gather valuable data across an entire logistics operation. For example, sensors placed on shipping containers and vehicles can capture trip data, leading to more efficient shipping routes and greater cost savings.

Instead of being a back-office executive, the CIO must use influence over new technologies to identify cost-saving opportunities or create additional revenue streams. With their knowledge of modern technology trends, CIOs become responsible for maintaining their company’s competitive edge. The responsibilities of modern CIOs at the forefront of technology and information systems are changing rapidly, and those who do not adapt will quickly fall behind.

To learn more about the changing landscape of IT and to network with over 100 IT executives, join us at the HMG 2016 CIO Executive Leadership Summit in Denver, CO on September 1, 2016. The speakers for this event include CIOs pushing the boundaries of what’s possible: Rob Dravenstott of DISH Network, Stephen Gold of CVS Health, and Renee Arrington of Pearson Partners International, Inc. At the Summit, the SingleStore team will be available to talk about analyzing real-time data to optimize business processes and create new revenue streams. See you there!

Seven Talks You Can’t Miss at Gartner Catalyst 2016

The 2016 Gartner Catalyst Conference kicks off August 15-18 in San Diego and will feature over 150 sessions for technical professionals. The conference offers eight in-depth tracks on topics including data centers, data and analytics, security, software, mobility, cloud, digital workplaces, and the Internet of Things.

Book an in-person SingleStore demo at Gartner Catalyst ⇒ singlestore.com

This year, Gartner has chosen the following theme to guide the experience at Catalyst: Architecting the On-Demand Digital Business. Each track reinforces the importance of embracing modern architecture in order to sense, adapt, and scale businesses for long-lasting impact. There will be valuable opportunities to hear directly from leading analysts, who have spent months researching and analyzing industry trends. Here are six analyst sessions, plus a SingleStore speaking session, that we recommend for staying on top of real-time data trends:

From Data to Insight to Action: Building a Modern End-to-End Data Architecture
Monday, 15 August 2016, 9:30 AM – 10:15 AM
Carlie J. Idoine, Research Director, Gartner @CarlieIdoine
For years, IT organizations have been dealing with a steady rise in the volume, velocity, and variety of data. But now, unprecedented new data sources, such as IoT, are pushing infrastructures to the limit. This session defines a bold strategy and a highly scalable data management architecture, built on technologies such as cloud computing, predictive analytics, and machine learning, that scales, responds automatically, and unlocks enormous business value.

Girls Who Code Meet the Women of SingleStore

Last week, SingleStore hosted 20 young women from the Girls Who Code Summer Immersion Program. Over the course of seven weeks, the group visits some of the Bay Area’s hottest tech companies to gain exposure to the tech industry. These aspiring engineers, programmers, and future tech leaders are part of a greater movement to bridge the gender divide in tech workplaces.

Girls Who Code is a non-profit organization founded in 2012 and dedicated to inspiring young women to pursue education and careers in STEM subjects: science, technology, engineering, and math. It is building the largest pipeline of female engineers in the United States, and its rapid expansion since founding reinforces that mission. Beginning with just 20 girls in New York, Girls Who Code today reaches 10,000 girls across 42 states.

Geospatial Data Meetup with Mapbox

The global availability of mobile technology means that everyone is connected on the go. For businesses to truly penetrate the consumer market in the age of on-demand products and services, they must find a way to make use of mobile data. Every data point has a place, and this is where geospatial data analytics comes into play. The ability to analyze geospatial data and build location-aware applications will separate market leaders from the names left behind. If you know when and how connected consumers interact in different places, and can harness data from sensors, IoT, and machine-to-machine communication, you can deliver the most efficient, personalized experience.

SingleStore Meetup with Mapbox: Visualize Your World with Geospatial Data

Our next meetup spotlights geospatial data analytics. Join us in SoMa, San Francisco to learn about innovative tools for mastering geospatial analytics and building geo-enabled applications. RSVP for our Geospatial Data Meetup on Wednesday, June 29, 2016 ⇒
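To give a flavor of what geospatial SQL looks like, here is a hedged sketch using SingleStore’s geospatial point type and distance function; the schema, names, and coordinates are made up for illustration:

```sql
-- Hypothetical table of mobile check-in events with a geospatial column.
CREATE TABLE checkins (
    user_id  BIGINT NOT NULL,
    ts       DATETIME NOT NULL,
    location GEOGRAPHYPOINT NOT NULL,
    INDEX (location)               -- spatial index for proximity queries
);

-- How many distinct users were within 500 meters of a venue in the last hour?
SELECT COUNT(DISTINCT user_id) AS nearby_users
FROM checkins
WHERE ts >= NOW() - INTERVAL 1 HOUR
  AND GEOGRAPHY_WITHIN_DISTANCE(location, 'POINT(-122.3986 37.7907)', 500);
```

Queries like this, fed by a live stream of check-ins, are exactly the kind of geo-enabled application the meetup will explore.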