First SingleStore Community Meetup in San Francisco
Trending

First SingleStore Community Meetup in San Francisco

Join us this Wednesday, July 15th at 6pm for the first official SingleStore Meetup! Our headquarters are located in the South of Market (SOMA) neighborhood of San Francisco, a veritable hotbed of technology infrastructure startups. We want to foster a community of collaboration and create a space for our peers to both learn and share. This meetup represents a gathering of database architects, programmers, cloud enthusiasts and other technology professionals in the Bay Area.

What’s in store for our meetup:
6:00-7:00pm – Happy Hour with heavy hors d’oeuvres
7:00-7:30pm – Carlos to present on Community Edition and performance on Amazon’s new M4 instance
7:30-8:00pm – Q&A and more Happy Hour

Carlos Bueno, Principal Product Manager at SingleStore, will demonstrate how to spin up your very own cluster with SingleStore Community Edition, free forever with unlimited scale and capacity. He will also demo the recently launched database speed test to evaluate cluster performance, while exploring deployments on M4 instances, the latest offering from Amazon Web Services. Feel free to bring your laptop and participate in the hands-on portion of the event. Save any questions for our Q&A session directly following the presentation.

RSVP: http://www.meetup.com/SingleStore. We look forward to seeing you there!
Read Post
Join SingleStore at the Inaugural In-Memory Computing Summit
Trending

Join SingleStore at the Inaugural In-Memory Computing Summit

The inaugural In-Memory Computing Summit begins next week, June 29-30, and we are thrilled to be a part of it. From speaking sessions on Spark and customer use cases, to games and giveaways, you will not want to miss the action. Visit us at booth #4 to pick up our brand new t-shirt, and learn how in-memory computing can bring peak performance to new or existing applications.

SingleStore Speaking Sessions

From Spark to Ignition: Fueling Your Business on Real-Time Analytics
Monday, June 29 at 10:40am – Eric Frenkiel, SingleStore CEO and Co-Founder
Real-time is the next phase of big data. For the modern enterprise, processing and analyzing large volumes of data quickly is integral to success. SingleStore and Spark share design philosophies like memory-optimized data processing and scale-out on commodity hardware that enable enterprises to build real-time data pipelines with operational data always online. This session shares hands-on ‘how-to’ recipes to build workflows around Apache Spark, with detailed production examples covered.

A Hitchhiker’s Guide to the Startup Data Science Platform
Monday, June 29 at 4:40pm – David Abercrombie, Principal Data Analytics Engineer at Tapjoy
Join David Abercrombie for a session chronicling the growth of the Tapjoy data science team as a lens for examining the infrastructure and technology most critical to their success. This includes implementations and integrations of Hadoop, Spark, NoSQL and SingleStore, which enable Tapjoy to turn sophisticated algorithms into serviceable, data-driven products.

Resources to Gear Up for the Event

Gartner Market Guide for In-Memory DBMS
This complimentary guide from Gartner provides a comprehensive overview of the in-memory database landscape. Download it to learn about three major use cases for in-memory databases and Gartner’s recommendations for evaluation and effective use. Download the Guide

The Modern Database Landscape
Download this white paper to learn how in-memory computing enables transactions and analytics to be processed in a single system and how to leverage converged processing to save time and cut costs, while providing real-time insights that facilitate data-driven decision-making. Download the White Paper

Games and Giveaways
Drop by the SingleStore booth #4 to get your free, super-soft t-shirt and play our reaction test game for a chance to win an Estes ProtoX Mini Drone.
Read Post
Top 5 Questions Answered at Spark Summit
Trending

Top 5 Questions Answered at Spark Summit

The SingleStore team enjoyed sponsoring and attending Spark Summit last week, where we spoke with hundreds of developers, data scientists, and architects all getting a better handle on modern data processing technologies like Spark and SingleStore. After a couple of days on the expo floor, I noticed several common questions. Below are some of the most frequent questions and answers exchanged in the SingleStore booth.

1. When should I use SingleStore?
SingleStore shines in use cases requiring analytics on a changing data set. The legacy data processing model, which creates separate silos for transactions and analytics, prevents updated data from propagating to reports and dashboards until the nightly or weekly ETL job runs. Serving analytics from a real-time operational database means reports and dashboards are accurate up to the last event, not last week. That said, SingleStore is a relational database and you can use it to build whatever application you want! In practice, many customers choose SingleStore because it is the only solution able to handle concurrent ingest and query execution for analyzing changing datasets in real time.

2. What does SingleStore have to do with Spark?
Short answer: you need to persist Spark data somewhere, whether in SingleStore or in another data store. Choosing SingleStore provides several benefits, including:
- In-memory storage and data serving for maximum performance
- Structured database schema and indexes for fast lookups and query execution
- A connector that parallelizes data transfer and processing for high throughput

Longer answer: There are two main use cases for Spark and SingleStore:
- Load data through Spark into SingleStore, transforming and enriching data on the fly in Spark. In this scenario, data is structured and ready to be queried as soon as it lands in SingleStore, enabling applications like dashboards and interactive analytics on real-time data. We demonstrated this “real-time pipeline” at Spark Summit, processing and analyzing real-time energy consumption data from tens of millions of devices and appliances.
- Leverage the Spark DataFrame API for analytics beyond SQL using data from SingleStore. One of the best features of Spark is the expressive but concise programming interface. In addition to enabling SingleStore users to express iterative computations, it gives them access to the many libraries that run on the Spark execution engine. The SingleStore Spark connector is optimized to push computation into SingleStore to minimize data transfer and to take advantage of the SingleStore optimizer and indexing.

3. What’s the difference between SingleStore and Spark SQL?
There are several differences:
- Spark is a data processing framework, not a database, and does not natively support persistent storage. SingleStore is a database that stores data in memory and writes logs and full database snapshots to disk for durability.
- Spark treats datasets (RDDs) as immutable – there is currently no concept of an INSERT, UPDATE, or DELETE. You could express these concepts as a transformation, but this operation returns a new RDD rather than updating the dataset in place. In contrast, SingleStore is an operational database with full transactional semantics.
- SingleStore supports updatable relational database indexes. The closest analogue in Spark is IndexedRDD, which is currently under development and provides updatable key/value indexes within a single thread.
- In addition to providing a SQL server, the Spark DataFrame library is a general-purpose library for manipulating structured data.

4. How do SingleStore and Spark interact with one another?
The SingleStore Spark Connector is an open source tool available on the SingleStore GitHub page. Under the hood, the connector creates a mapping between SingleStore database partitions and Spark RDD partitions. It also takes advantage of both systems’ distributed architectures to load data in parallel. The connector comes with a small library that includes the SingleStoreRDD class, allowing the user to create an RDD from the result of a SQL query in SingleStore. SingleStoreRDD also comes with a method called saveToSingleStore(), which makes it easy to write data to SingleStore after processing; a minimal sketch of this read/transform/write flow follows below.

5. Can I have one of those cool t-shirts? (Of course!) What does the design mean?
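To make the read/transform/write flow in question 4 concrete, here is a minimal Scala sketch. It assumes the API described above (a SingleStoreRDD built from a SQL query and a saveToSingleStore() write method); the package name, constructor parameters, and row representation are illustrative assumptions rather than the connector's exact signatures, so consult the connector's GitHub page for the real API.

```scala
import org.apache.spark.{SparkConf, SparkContext}
// Hypothetical import path; the real connector's package name may differ.
import com.singlestore.spark.SingleStoreRDD

object ConnectorSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("singlestore-connector-sketch"))

    // Read: build an RDD from the result of a SQL query in SingleStore.
    // The connector maps each Spark partition to a SingleStore database
    // partition so the result set is read in parallel.
    // (Constructor parameters shown here are assumptions.)
    val events = new SingleStoreRDD(
      sc,
      host = "127.0.0.1", port = 3306, user = "root", password = "",
      dbName = "analytics",
      sql = "SELECT user_id FROM events WHERE ts > NOW() - INTERVAL 1 HOUR")

    // Transform in Spark: count events per user over the last hour.
    // (Assumes each row comes back as an indexed sequence of column values.)
    val counts = events
      .map(row => (row(0).toString, 1L))
      .reduceByKey(_ + _)

    // Write: persist the aggregates back to a SingleStore table after processing.
    // (Assumes saveToSingleStore is available on the transformed RDD,
    //  e.g. via an implicit conversion provided by the connector library.)
    counts.saveToSingleStore("analytics", "hourly_event_counts")

    sc.stop()
  }
}
```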
Read Post
Join SingleStore at Spark Summit
Trending

Join SingleStore at Spark Summit

We’re excited to be at Spark Summit next week in our hometown of San Francisco. If you’re attending, stop by booth K6 for games and giveaways, and check out our latest demo that showcases how organizations are using SingleStore and Spark for real-time analytics.

Meet with us at Spark Summit
Schedule an in-person meeting or demo at the event. Reserve a Time →

SingleStore and Spark Highlights
Over the past year, we’ve been working closely with our customers and the Spark community to build real-time applications powered by Spark and SingleStore. Highlights include:

Real-Time Analytics at Pinterest with Spark and SingleStore
Learn how Pinterest built a real-time data pipeline with SingleStore and Spark Streaming to achieve higher performance event logging, reliable log transport, and faster query execution on real-time data. Read the full post on the Pinterest Engineering Blog

SingleStore Spark Connector
The SingleStore Spark Connector provides everything you need to start using Spark and SingleStore together. It comes with a number of optimizations, such as reading data out of SingleStore in parallel and making sure that Spark colocates data in its cluster. Download now on GitHub

Building Real-Time Platforms with Apache Spark
Watch our session from Strata+Hadoop World to learn how hybrid transactional and analytical data processing capabilities, integrated with Apache Spark, enable businesses to build real-time platforms for applications.
Read Post
What We Talk About When We Talk About Real-Time
Trending

What We Talk About When We Talk About Real-Time

The phrase “real-time,” like love, means different things to different people. At its most basic, the term implies near simultaneity. However, the amount of time that constitutes the “real-time window” differs across industries, professions, and even organizations. Definitions vary, and the term is so often (ab)used by marketers and analysts that some dismiss “real-time” as a meaningless buzzword.

However, there is an important distinction between “real-time” and “what we have now but faster.” A real-time system is not just faster, but fast enough to cross a performance threshold such that your business can reproducibly gain net new value.

This abstract definition is likely too general to assuage the real-time absolutists. However, there is no way to select a single numerical definition of “real-time” that works for all use cases. Rather, it’s better to talk about “real-time” as a heuristic and allow stakeholders to establish conventions tailored to their own idiosyncratic business problems. Instead of claiming real-time means X seconds, this article will describe two classes of real-time applications and their performance requirements.

Machines Acting in Real-Time
One class of real-time applications is where machines programmatically make data-driven decisions. The ability to automate data-driven decisions is especially valuable for applications where the volumes of data or demanding service level agreements (SLAs) make it impossible for the decision to hinge on human input.

Example: Real-Time Bidding
Take the example of digital advertising, where real-time user targeting and real-time bidding on web traffic have revolutionized the industry. Selecting a display ad or choosing whether to buy traffic based on the viewer’s demographic and browsing information can boost click-through and conversion rates. Clearly, the process of choosing ads and deciding whether to buy traffic must be done programmatically – the volume of traffic on a busy site is too large and the decisions must be made too quickly for it to be done by humans.

For this application, “real-time” means roughly “before the web page loads in the browser window.” This brief lag period is essentially free computation time while the viewer waits a fraction of a second for the page to load.

This definition of real-time may not be numerically absolute, but it is well-defined. While businesses implementing real-time advertising platforms will often impose particular SLAs (“this database call must return in x milliseconds”), these time values are just heuristics representing an acceptable execution time. In practice, there may not be a hard and fast cutoff time beyond which it “doesn’t work.” The business may determine that clicks tail off at some rate as page load time lengthens, and shrinking average load time causes an increase in clickthrough rate.

Example load times and clickthrough rates:
Time to load (s) | Clickthrough rate (%)
0.2              | 3
0.4              | 2
0.6              | 1
0.8              | 0.5
1.0              | 0.1

This real-time window is not a discrete interval that guarantees uniform outcomes – rather, it’s defined probabilistically. Every time a user views a web page with a display ad, we know they will click on an ad with some probability (i.e. the clickthrough rate). If the page or display ad loads slowly, the viewer is more likely to overlook the ad, or navigate to a different page entirely, decreasing the average clickthrough rate. If the page and ad load quickly, the viewer is more likely to click on the ad, increasing the average clickthrough rate.

While this definition of real-time allows a range of response times, in practice the range tails off quickly. For instance, the clickthrough rate at a 2-second load time is likely near 0%. This is what I mean when I say a real-time application is one that is “fast enough” to capture some untapped value. The “real-time” approach of dynamically choosing display ads or bidding on traffic based on user profile information is fundamentally different from the legacy approach of statically serving ads regardless of viewer profile. However, real-time digital advertising is only worth implementing if it can be done fast enough to lift intended user behavior.

There are many applications for machines programmatically making decisions in real time, not just digital advertising. Applications include fraud detection, geo-fencing, and load-balancing a datacenter or CDN, to name a few.

Humans Acting on Real-Time Data
The other class of real-time applications is where humans respond to events and make data-driven decisions in real time. Despite strides in artificial intelligence and predictive analytics, many business problems still require a human touch. Often, solutions require synthesizing information about a complex system or responding to anomalous events. While these problems require the critical thinking of a human, they are still data-driven. Providing better information sooner lets humans reach a solution faster.

Example: Data Center Management
A good example of this type of problem is managing complex systems like data centers. Some of the management can be automated, but ultimately humans need to respond to unexpected failure scenarios. For online retailers and service providers, uptime directly correlates with revenue.

With or without a real-time monitoring system in place, data center administrators can access live server data through remote access and systems monitoring utilities. But standard systems monitoring tools can only provide so much information. The challenge lies in collecting, summarizing, and understanding the flood of machine-generated data flowing from hundreds or thousands of servers. Doing this in real time has some demanding requirements:
- Granular logging of network traffic, memory usage, and other important system metrics
- Interactive query access to both recent and historical log and performance data, so administrators can spot anomalies and act on them
- The ability to generate statistical reports on recent machine data without tying up the database and blocking new data from being written

The third requirement is arguably the hardest, and the one on which the definition of real-time hinges. It entails processing and recording all machine data (an operational or OLTP workload) and aggregating the data into useful performance statistics (a reporting or OLAP workload). The reporting queries must execute quickly without blocking the inflow of new data.

Once again, there is no hard and fast rule for what constitutes a real-time window. It could be a second or a few seconds. Rather, the distinguishing feature of a real-time monitoring system is the ability to converge live data with historical data, and to interactively analyze and report on them together. The technical challenge is not simply collecting data, but how quickly you can extract actionable information from it.

There are many applications for real-time monitoring beyond data center administration. It can be applied to understand and optimize complex dynamic systems such as an airline or shipping network. It can also be used for financial applications like position tracking and risk management.

Moving to Real-Time Data Pipelines
While the specific numerical values associated with “real-time” may vary between organizations, many enterprises are deploying similar data processing architectures to power data-driven applications. In particular, enterprises are replacing legacy architectures that separate operational data processing from analytical data processing with real-time data pipelines that can ingest, serve, and query data simultaneously. SingleStore forms the core of many such pipelines, often used in conjunction with Apache Kafka and Spark Streaming, for distributed, fault-tolerant, and high-throughput data processing. A minimal sketch of such a pipeline follows below.
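As an illustration of such a pipeline, the following Scala sketch wires a Kafka topic into Spark Streaming and writes each micro-batch into SingleStore over its MySQL-compatible wire protocol. The topic, table, and message format are hypothetical, the Kafka integration shown is the Spark 1.x direct-stream API, and a MySQL-compatible JDBC driver is assumed to be on the classpath.

```scala
import java.sql.DriverManager

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object RealTimePipelineSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-spark-singlestore-sketch")
    val ssc  = new StreamingContext(conf, Seconds(1)) // 1-second micro-batches

    // Hypothetical broker list and topic name.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc,
      Map("metadata.broker.list" -> "localhost:9092"),
      Set("energy_readings"))

    // Parse "device_id,watts" messages and aggregate usage per device per batch.
    val perDevice = stream
      .map { case (_, value) =>
        val Array(id, watts) = value.split(",") // assumes well-formed messages
        (id, watts.toDouble)
      }
      .reduceByKey(_ + _)

    // Write each micro-batch into SingleStore; table name and schema are illustrative.
    perDevice.foreachRDD { rdd =>
      rdd.foreachPartition { rows =>
        val conn = DriverManager.getConnection("jdbc:mysql://127.0.0.1:3306/energy", "root", "")
        val stmt = conn.prepareStatement(
          "INSERT INTO device_usage (device_id, watts, batch_ts) VALUES (?, ?, NOW())")
        rows.foreach { case (id, watts) =>
          stmt.setString(1, id)
          stmt.setDouble(2, watts)
          stmt.addBatch()
        }
        stmt.executeBatch()
        conn.close()
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```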
Read Post
Join SingleStore at ad:tech San Francisco
Trending

Join SingleStore at ad:tech San Francisco

Read Post
Tech Field Day Reception May 13th
Trending

Tech Field Day Reception May 13th

Tech Field Day is coming to San Francisco next week with a focus on data, and that means time for a party! On Wednesday, May 13th, at…
Read Post
Filling the Gap Between HANA and Hadoop
Trending

Filling the Gap Between HANA and Hadoop

Takeaways from the Gartner Business Intelligence and Analytics Summit

Last week, SingleStore had the opportunity to participate in the Gartner Business Intelligence and Analytics Summit in Las Vegas. It was a fun chance to talk to hundreds of analytics users about their current challenges and future plans. As an in-memory database company, we fielded questions on both sides of the analytics spectrum. Some attendees were curious about how we compared with SAP HANA, an in-memory offering at the high end of the solution spectrum. Others wanted to know how we integrated with Hadoop, the scale-out approach to storing and batch processing large data sets. And in the span of a few days and many conversations, the gap between these offerings became clear. What also became clear is the market appetite for a solution.

Hardly Accessible, Not Affordable
While HANA does offer a set of in-memory analytical capabilities primarily optimized for the emerging SAP S/4HANA suite, it remains at such upper echelons of the enterprise IT pyramid that it is rarely accessible across an organization. Part of this stems from the length and complexity of HANA implementations and deployments. Its top-of-the-line price and mandated hardware configurations also mean that in-memory capabilities via HANA are simply not affordable for a broader set of needs in a company.

Hanging with Hadoop
On the other side of the spectrum lies Hadoop, a foundational big data engine, but often akin to a large repository of log and event data. Part of Hadoop’s rise has been the Hadoop Distributed File System (HDFS), which allowed for cheap and deep storage on commodity hardware. MapReduce, the processing framework atop HDFS, powered the first wave of big data, but as the world moves towards real-time, batch processing remains helpful but rarely sufficient for a modern enterprise.

In-Memory Speeds and Distributed Scale
Between these ends of the spectrum lies an opportunity to deliver in-memory capabilities with an architecture on distributed, commodity hardware accessible to all. The computing theme of this century is piles of smaller servers or cloud instances, directed by clever new software, relentlessly overtaking use cases that were previously the domain of big iron. Hadoop proved that “big data” doesn’t mean “big iron.” The trend now continues with in-memory.

Moving to Converged Transactions and Analytics
At the heart of the in-memory shift is the convergence of transactions and analytics into a single system, something Gartner refers to as hybrid transactional/analytical processing (HTAP). In-memory capabilities make HTAP possible. But data growth means the need to scale. Easily adding servers or cloud instances to a distributed solution lets companies meet capacity increases and store their highest value, most active data in memory. But an all-memory, all-the-time solution might not be right for everyone. That is where combining all-memory and disk-based stores within a single system fits. A tiered architecture provides infrastructure consolidation and low-cost expansion for high-value, less active data. Finally, ecosystem integration makes data pipelines simple, whether that includes loading directly from HDFS or Amazon S3, running a high-performance connector to Apache Spark, or just building upon a foundational programming language like SQL. SQL-based solutions can provide immediate utility across large parts of enterprise organizations. The familiarity and ubiquity of the language means access to real-time data via SQL becomes a fast path to real-time dashboards, real-time applications, and an immediate impact.

Related Links:
To learn more, read How HTAP Remedies the Four Drawbacks of Traditional Systems here.
Want to learn more about in-memory databases and opportunities with HTAP? Take a look at the recent Gartner report here.
If you’re interested in test driving an in-memory database that offers the full benefits of HTAP, give SingleStore a try for free, or give us a ring at (855) 463-6775.
Read Post
Real-Time Geospatial Intelligence with Supercar
Trending

Real-Time Geospatial Intelligence with Supercar

Today, SingleStore is showcasing a brand new demonstration of real-time geospatial location intelligence at the Gartner Business Intelligence and Analytics Summit in Las Vegas. The demonstration, titled Supercar, makes use of a dataset containing the details of 170 million real-world taxi rides. By sampling this dataset and creating real-time records while simultaneously querying the data, Supercar simulates the ability to monitor and derive insights across hundreds of thousands of objects on the go. A minimal sketch of this replay-while-querying pattern follows below.
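Here is a rough Scala sketch of that pattern: one thread replays ride records while another continuously queries them. The rides table, its schema, and the connection details are hypothetical, the coordinates are random stand-ins for the sampled dataset, and a MySQL-compatible JDBC driver is assumed on the classpath (SingleStore speaks the MySQL wire protocol).

```scala
import java.sql.DriverManager

object SupercarSketch {
  // Hypothetical schema: rides(id BIGINT, lon DOUBLE, lat DOUBLE, picked_up DATETIME)
  private val url = "jdbc:mysql://127.0.0.1:3306/supercar"

  def main(args: Array[String]): Unit = {
    // Writer: replay rides as if they were happening right now.
    val writer = new Thread(new Runnable {
      def run(): Unit = {
        val conn = DriverManager.getConnection(url, "root", "")
        val insert = conn.prepareStatement(
          "INSERT INTO rides (id, lon, lat, picked_up) VALUES (?, ?, ?, NOW())")
        var id = 0L
        while (true) {
          id += 1
          insert.setLong(1, id)
          insert.setDouble(2, -74.05 + math.random * 0.2) // lon near NYC
          insert.setDouble(3, 40.65 + math.random * 0.2)  // lat near NYC
          insert.executeUpdate()
        }
      }
    })

    // Reader: concurrently ask how many rides started in the last minute.
    val reader = new Thread(new Runnable {
      def run(): Unit = {
        val conn = DriverManager.getConnection(url, "root", "")
        while (true) {
          val rs = conn.createStatement().executeQuery(
            "SELECT COUNT(*) FROM rides WHERE picked_up > NOW() - INTERVAL 1 MINUTE")
          if (rs.next()) println(s"rides in the last minute: ${rs.getLong(1)}")
          Thread.sleep(1000)
        }
      }
    })

    writer.start()
    reader.start()
    writer.join()
  }
}
```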
Read Post
In The End We Seek Structure
Trending

In The End We Seek Structure

In Short: A range of assumptions led to a boom in NoSQL solutions, but in the end, SQL and relational models find their way back as a critical part of data management.

Background
By the mid 2000s, 10 years into the Netscape-inspired mainstream Internet, webscale workloads were pushing the limits of conventional databases. Traditional solutions could not keep up with a myriad of Internet users simultaneously accessing the same application and database. At the time, many websites used relational databases like MySQL, SQL Server from Microsoft, or Oracle. Each of these databases relied on a relational model using SQL, the Structured Query Language, which emerged nearly 40 years ago and remains the lingua franca of data management.

Genesis of NoSQL
Scaling solutions is hard, and scaling a relational, SQL database proved particularly challenging, in part leading to the emergence of the NoSQL movement.

FIGURE 1: Interest in NoSQL, 2009–2015. Source: Google Trends
Read Post
SingleStore at Gartner Business Analytics and Intelligence Summit
Trending

SingleStore at Gartner Business Analytics and Intelligence Summit

We are thrilled to be in Las Vegas this week for the Gartner Business Analytics and Intelligence Summit. We will be at booth #119 and we have a ton in store for the event, including games and giveaways, a happy hour for attendees, and a featured session from SingleStore CEO Eric Frenkiel. We will also be showcasing our new geospatial capabilities, and a demo of how Pinterest is using SingleStore and Spark for real-time analytics.

Free Gartner Report: Market Guide for In-Memory DBMS
See the latest developments and use cases for in-memory databases. Download the Report Here →
From the report… “The growing number of high performance, response-time critical and low-latency use cases (such as real-time repricing, power grid rerouting, logistics optimization), which are fast becoming vital for better business insight, require faster database querying, concurrency of access and faster transactional and analytical processing. IMDBMSs provide a potential solution to all these challenging use cases, thereby accelerating its adoption.”

Don’t Miss the SingleStore Featured Session
From Spark to Ignition: Fueling Your Business on Real-Time Analytics
SingleStore CEO and Founder, Eric Frenkiel, will discuss how moving from batch-oriented data silos to real-time pipelines means replacing batch processes with online datasets that can be modified and queried concurrently. This session will cover use cases and customer deployments of Hybrid Transaction/Analytic Processing (HTAP) using SingleStore and Spark.

Session Details
Speaker: Eric Frenkiel, SingleStore CEO and Founder
Date and Time: 12:30pm–12:50pm, Monday, 3/30/2015
Location: Theater A, Forum Ballroom

Join SingleStore on Monday Night for Happy Hour
We will be hosting a happy hour at Carmine’s in The Forum Shops at Caesars on Monday night at 8:00PM. ALTER TABLE TINIs and heavy hors d’oeuvres will be served. Stop by and meet with SingleStore CEO Eric Frenkiel and CMO Gary Orenstein. More details here.

Suggested Sessions
We have handpicked a few sessions that you don’t want to miss.

Do We Still Need a Data Warehouse?
Speaker: Donald Feinberg, VP Distinguished Analyst
30 March 2015, 2:00 PM to 2:45 PM
For more than a decade, the data warehouse has been the architectural foundation of most BI and analytic activity. However, various trends (in-memory, Hadoop, big data and the Internet of Things) have compelled many to ask whether the data warehouse is still needed. This session provides guidance on how to craft a more modern strategy for data warehousing.

Will Hadoop Jump the Spark?
Speaker: Merv Adrian, Research VP
31 March 2015, 2:00 PM to 2:45 PM
The Hadoop stack continues its dramatic transformation. The emergence of Apache Spark, suitable for many parts of your analytic portfolio, will rewrite the rules, but its readiness and maturity are in question.

The DBMS Dilemma: Choosing the Right DBMS for the Digital Business
Speaker: Donald Feinberg, VP Distinguished Analyst
31 March 2015, 2:00 PM to 2:45 PM
As your organization moves into the digital business era, the DBMS needs to support not only new information types but also the new transactions and analytics required for the future. The DBMS as we know it is changing. This session will explore the new information types, new transaction types and the technology necessary to support them.

Games and Giveaways
Read Post
SingleStore at Spark Summit East
Trending

SingleStore at Spark Summit East

We are happy to be in New York City this week for Spark Summit East. We will be sharing more about our new geospatial capabilities, as well as our work with Esri to showcase the power of SingleStore geospatial features in conjunction with Apache Spark. Last week we shared the preliminary release of SingleStore geospatial features introduced at the Esri Developer Summit in Palm Springs. You can read more about the live demonstration showcased at the summit here. The demonstration uses the “Taxistats” dataset: a compilation of 170 million real-world NYC taxi rides, including the GPS coordinates of the pickup and dropoff, distance, and travel time. SingleStore is coupled with the new version of Esri’s ArcGIS Server, which has a new feature to translate ArcGIS queries into external database queries. From there we generate heatmaps from the raw data in sub-second time; a minimal sketch of the kind of aggregation behind such a heatmap follows below. This week we launched the official news release of SingleStore geospatial capabilities. By integrating geospatial functions, SingleStore enables enterprises to achieve greater database efficiency with a single database that is in-memory, linearly scalable, and supports the full range of relational SQL and geospatial functions. With SingleStore, geospatial data no longer remains separate and becomes just another data type with lock-free capabilities and powerful manipulation functions.
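The heatmap generation boils down to a grid-binning aggregation over the ride coordinates. Below is a minimal Scala/JDBC sketch of that kind of query; the table and column names are hypothetical stand-ins for the Taxistats schema, and a real deployment would also push a spatial filter for the current map extent into the WHERE clause.

```scala
import java.sql.DriverManager

object HeatmapSketch {
  def main(args: Array[String]): Unit = {
    // SingleStore speaks the MySQL wire protocol, so a standard MySQL JDBC driver works.
    val conn = DriverManager.getConnection("jdbc:mysql://127.0.0.1:3306/taxistats", "root", "")

    // Bin pickups onto a ~0.001-degree grid; each (cell, count) row becomes one heatmap pixel.
    val rs = conn.createStatement().executeQuery(
      """SELECT ROUND(pickup_lon, 3) AS cell_lon,
        |       ROUND(pickup_lat, 3) AS cell_lat,
        |       COUNT(*)             AS rides
        |FROM rides
        |GROUP BY ROUND(pickup_lon, 3), ROUND(pickup_lat, 3)
        |ORDER BY rides DESC
        |LIMIT 10000""".stripMargin)

    while (rs.next())
      println(f"${rs.getDouble("cell_lon")}%.3f, ${rs.getDouble("cell_lat")}%.3f -> ${rs.getLong("rides")}")

    conn.close()
  }
}
```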
Read Post
SingleStore at the AMP Lab
Trending

SingleStore at the AMP Lab

Please join us next week as two members of the SingleStore engineering team present at the AMPLab at Berkeley on Wednesday, March 11th from 12:00pm to 1:00pm.

AMP Seminar
Ankur Goyal and Anders Papitto, SingleStore: A Distributed In-Memory SQL Database
Wednesday 3/11, Noon, 405 Soda Hall, Berkeley

Talk Abstract
This talk will cover the major architectural design decisions, with discussion of specific technical details as well as the motivation behind the big decisions. We will cover lock-free data structures, code generation, durability and replication, distributed query execution, and clustering in SingleStore. We will then discuss some of the new directions for the product, including some ideas on leveraging Spark.

Speakers
Ankur Goyal is the Director of Engineering at SingleStore. At SingleStore he has focused on distributed query execution and clustering, but has touched most of the engine. His areas of interest are distributed systems, compilers, and operating systems. Ankur studied computer science at Carnegie Mellon University and worked on distributed data processing at Microsoft before SingleStore.
Anders Papitto is an engineer at SingleStore, where he has worked on distributed query execution, columnstore storage and query execution, and various other components. He joined SingleStore shortly before completing his undergraduate studies at UC Berkeley.

About the AMPLab
AMP: Algorithms, Machines, People – turning up the volume on big data. Working at the intersection of three massive trends: powerful machine learning, cloud computing, and crowdsourcing, the AMPLab is integrating Algorithms, Machines, and People to make sense of Big Data. We are creating a new generation of analytics tools to answer deep questions over dirty and heterogeneous data by extending and fusing machine learning, warehouse-scale computing and human computation. We validate these ideas on real-world problems including participatory sensing, urban planning, and personalized medicine with our application and industrial partners.
Read Post
Video: The State of In-Memory and Apache Spark
Trending

Video: The State of In-Memory and Apache Spark

Strata+Hadoop World was full of activity for SingleStore. Our keynote explained why real-time is the next phase for big data. We showcased a live application with Pinterest, where they combine Spark and SingleStore to ingest and analyze real-time data. And we gave away dozens of prizes to Strata+Hadoop attendees who proved their latency-crushing skills in our Query Kong game.

During the event, Mike Hendrickson of O’Reilly Media sat down with SingleStore CEO Eric Frenkiel to discuss:
- The state of in-memory computing and where it will be in a year
- What Spark brings to in-memory computing
- Industries and use cases that are best suited for Spark

Get The SingleStore Spark Connector Guide
The 79-page guide covers how to design, build, and deploy Spark applications using the SingleStore Spark Connector. Inside, you will find code samples to help you get started and performance recommendations for your production-ready Apache Spark and SingleStore implementations. Download Here

Watch the video in full here:
Read Post