SingleStore Meetups - Year in Review
Trending

SingleStore Meetups - Year in Review

It has been six months since we began hosting meetups regularly at SingleStore. Our office is located in the heart of SoMa, two blocks from the Caltrain station, at the new San Francisco epicenter of tech startups, and we want to meet our neighbors and see what other cool technologies are out there! What better way than over cold brews, local pizza, and deep tech talks? In honor of the first official meetup of 2016, we decided to take a look back at the meetups of 2015 and share highlights from each one. We hope to see you at 534 4th St on January 21st for an intimate conversation with Chandan Joarder on Building Real-Time Digital Insight at Macys.com! RSVP for our next meetup: Building Real-Time Digital Insight at Macys.com. Without further ado, we present Meetups: A Year in Review.
Read Post
Everyone Deserves Nice Things
Trending

Everyone Deserves Nice Things

Software is eating the world! It’s a data explosion! The Internet is now of Things! Scratch that, even better – it is of Everything! Did Big Data just call out IoT on Twitter? Click here to find out. [1] I kid the Internet. In all seriousness, what a magical time we live in. Moore’s Law means cheap hardware, then next thing you know, Cloud. Internet ubiquity invites the globe to the party. Feats of software engineering that were impossible for nation-states to pull off a decade ago are de rigueur for clever teens. On their mobile phones.

I became disillusioned after a few years as a sales engineer for big-ticket software products. I talked to so many operations people who spent all their time putting out fires instead of automating and improving. Even worse, it seemed nearly all the actual users were sad about their applications. These amazing modern conveniences were supposed to make our lives easier, not more difficult. The actual outcome was the exact opposite of the expected outcome. Oh irony!

These thoughts and feelings led me to join Atlassian in 2008. If you have never heard of that company, I reckon you have at least heard of JIRA, Confluence or HipChat. Here was a group making software that people were using voluntarily. Even tiny teams could implement it without breaking the bank, or gratis if they were building open source. Furthermore, the company was totally focused on software development. Agile was rising to prominence. Git went from non-existence to dominance in an eye blink. Software development was undergoing a sea change in the right direction.

This is what brings me to SingleStore. Companies like Google, Facebook, eBay, and Netflix had to feed their databases and infrastructure all the steroids to meet the challenge of true web scale. They, among others, pioneered new ways to ingest and work with previously unimaginable mountains of data. Did you use metric prefixes above giga- in your daily life 10 years ago? Nor did I. Yottabytes of records, anyone? Being able to handle massive data sets and use them to make real-time decisions that delight customers is the new nice thing that I believe everyone deserves.

That is why I am elated to join SingleStore, focusing on the Community Edition. Imagine what you could build better, stronger and faster if the database underneath was ready to handle anything thrown at it. If you are already using SingleStore Community Edition, I am very keen to hear what you’re doing with it. If you have a moment, please take this very short survey. Don’t hesitate to hit me up on our public Slack, email or elsewise. And away we go…

[1] Citations:
Why Software Is Eating The World
Gartner Says Business Intelligence and Analytics Need to Scale Up to Support Explosive Growth in Data Sources
The Internet of Things is revolutionising our lives, but standards are a must
The Next Big Thing for Tech: The Internet of Everything
Read Post
Find Your IoT Use Case
Trending

Find Your IoT Use Case

As enterprises invest billions of dollars in solutions for the Internet of Things (IoT), business leaders seek compelling IoT use cases that tap into new sources of revenue and maximize operational efficiency. At the same time, advancements in data architectures and in-memory computing continue to fuel the IoT fire, enabling organizations to affordably operate at the speed and scale of IoT. In a recent webcast, Matt Aslett, Research Director at 451 Research, shared use cases across six markets where IoT will have a clear impact.

Watch the IoT and Multi-model Data Infrastructure Webcast Recording

Industrial Automation
Often viewed as the ‘roots of IoT’, organizations in the industrial automation sector are improving performance and reducing downtime by adding automation through sensors and making data available online.

Utilities
When people think about IoT, household utilities like thermostats and smoke alarms often come to mind. Smart versions of these devices not only benefit consumers, but also help utility providers operate efficiently, resulting in savings for all parties.

Retail
Bringing radio-frequency identification (RFID) online allows retailers to implement just-in-time (JIT) stock-keeping to cut inventory costs. Additionally, retailers can provide better shopping experiences in the form of mobilized point-of-sale systems and contextually relevant offers.

Healthcare
Connected health equipment allows for real-time health monitoring and alerts that offer improved patient treatment, diagnosis, and awareness.

Transportation and Logistics
IoT is improving efficiency in the transportation and logistics markets by providing benefits like just-in-time manufacturing and delivery, as well as improved customer service.

Automotive
The automobile industry is improving efficiency through predictive maintenance and internet-enabled fault diagnostics. Another interesting use case comes from capturing driving activity, as insurance companies can better predict driver risk and offer discounts (or premiums) based on data from the road.

Finding the Internet of Your Things
To take advantage of IoT, Matt notes that it is paramount to identify the top priorities for your specific case by asking the following questions:

Are there ‘things’ within your organization that would benefit from greater connectivity?
Can better use be made of the ‘things’ that are already network-ready and the data they create?
Are there ‘things’ outside the organization that would benefit from greater connectivity?
Is there a way to reap value from your customers’, partners’, or suppliers’ smart devices that would be mutually beneficial?

If you answered ‘yes’ to any of these questions, there is a good chance your organization can improve efficiency with an IoT solution. To get started, watch the recording of the IoT and multi-model data infrastructure webcast and view the slides here:
Read Post
Rapid Scaling for Startups: Lessons from Salesforce and Twitter
Trending

Rapid Scaling for Startups: Lessons from Salesforce and Twitter

RSVP for the SingleStore Meetup: 10 Tips to Rapidly Scale Your Startup with Chris Fry, former SVP of Engineering at Twitter

There is nothing more challenging and exciting than experiencing hypergrowth at a technology company. As users adopt a technology platform, you have to rebuild the plane while flying it, which can be a harrowing process. I found several approaches to scaling that held true across Salesforce, Twitter, and the startups I now work with on a daily basis. Every company is different, but these common problems and solutions should help you on your journey.

Team Structure
The first problem most early stage companies face is how to grow and structure the team. There are common breaking points around 20 people and again around 150, where what you were doing ceases to function. What should your teams look like while you are simultaneously tackling growth, structure, and scale? Small teams are the most effective teams, with an ideal size between two and ten people (with smaller being better). Large teams don’t stay in sync, while small teams can organically communicate, solve problems, and fill in for teammates. You can decompose large teams into autonomous small teams. The best teams can work autonomously. Make sure that teams have all the resources needed to deliver on their goals and know the boundaries of what they should take on. It’s important to create teams that span technology horizontally to create consistency and vertically to attack user-focused problems. Teams need structure so they can deliver on their mission without other teams getting in the way.

Fast Iteration
How do you keep delivering as your company scales? Early in a technology company’s life, many teams naturally iterate quickly. Unfortunately, as companies scale, communication and technology issues slow iteration speed. The best thing to focus on is delivering work at a regular, quick pace. Creating software is a learning process, and each iteration is a chance for the team to learn. Automation also plays a critical role in maintaining a high-quality product and should be developed while you are building features. Remember, quality is free – the better the software you build, and the more you test it, the faster you can change it.

Retention and Culture
How do you build and maintain a unique engineering culture? To scale an engineering culture you must have one. Discuss it. Set principles. Teach the team to easily remember and articulate key cultural tenets. Put these tenets in writing to bring on new employees and serve as a reference point. Finally, live the culture you set. Culture is a soft topic, and if it’s not lived from the top it is just words on paper. To steal from Dan Pink, I would always focus on delivering autonomy, mastery, and purpose to each engineer and the engineering team as a whole, and build out the cultural practices from there – for example, hack weeks, or letting people pick which team they work on every quarter. At both Salesforce and Twitter we stressed a culture of experimentation and learning. This helped us focus on product and technology innovation and led directly to better product features for our primary platforms. It’s important to invest in the technical infrastructure to support iteration. At Twitter we used Mesos to scale computation and built out distributed storage to make data available anywhere it was needed. Your infrastructure should allow any engineer to put an idea into production in a day.
Learn More Scaling Tips
Chris will be presenting “10 Tips to Rapidly Scale Your Startup” on Thursday evening, September 24th, at SingleStore headquarters in San Francisco. Visit http://www.meetup.com/memsql to register.

About Chris Fry
Chris Fry was Senior Vice President of Engineering at Twitter, Inc. and, before that, Senior Vice President of Development at Salesforce. He is currently an advisor to SingleStore and other startups.
Read Post
5 Big Data Themes – Live from the Show Floor
Trending

5 Big Data Themes – Live from the Show Floor

We spent last week at the Big Data Innovation Summit in Boston. Big data trade shows, particularly those mixed with sophisticated practitioners and people seeking new solutions, are always a perfect opportunity to take a market pulse. Here are the five big data themes we encountered over the course of two days.

Real-Time Over Resuscitated Data
The action is in real time, and trade show discussions often gravitate to deriving immediate value from real-time data. All of the megatrends apply – social, mobile, IoT, cloud – pushing startups and global companies to operate instantly in a digital, connected world. While there has been some interest in resuscitating data from Hadoop with MapReduce or SQL on Hadoop, those directions are changing. For example, Cloudera recently announced the One Data Platform Initiative, indicating a shift from MapReduce: “this initiative will enable [Spark] to become the successor to Hadoop’s original MapReduce framework for general Hadoop data processing.” With Spark’s capabilities for streaming and in-memory processing, we are likely to see a focus on those real-time workflows. This is not to say that Spark won’t be used to explore expansive historical data throughout Hadoop clusters. But judge your own predilection for real-time and historical data. Yes, both are important, but human beings tend to have an insatiable desire for the now.

Data Warehousing is Poised for Refresh
When the last wave of data warehousing innovation hit mainstream, there was a data M&A spree that started with SAP’s acquisition of Sybase in May 2010. Within 10 months, Greenplum was acquired by EMC, Netezza by IBM, Vertica by HP, and Aster by Teradata. Today, customers are suffering economically with these systems, which have become expensive to maintain and do not deliver the instant results companies now expect. Applications like real-time dashboards push conventional data warehousing systems beyond their comfort zone, and companies are seeking alternatives.

Getting to ETL Zero
If there is a common enemy in the data market, it is ETL, or the Extract, Transform, and Load process. We were reminded of this when Riley Newman from Airbnb mentioned that ETL was like extracting teeth… no one wanted to do it. Ultimately, Riley did find a way to get it done by shifting ETL from a data science to a data engineering function (see final theme below), but I have yet to meet a person who is happy with ETL in their data pipeline. ETL pain is driving new solution categories like Hybrid Transactional and Analytical Processing, or HTAP for short. In HTAP solutions, transactions and analytics converge on a single data set, often enabled by in-memory computing (see the sketch at the end of this post). HTAP capabilities are at the forefront of new digital applications with situational awareness and real-time interaction.

The Matrix Dashboard is Coming
Of course, all of these real-time solutions need dashboards, and dashboards need to be seen. Hiperwall makes a helpful solution to tie multiple monitors together in a single, highly-configurable screen. The dashboards of the future are here!
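As promised above, here is a minimal sketch of the HTAP idea in Python: a single table absorbs a transactional write and immediately serves an analytical aggregate, with no ETL hop in between. The host, credentials, and `events` schema are hypothetical; any database that speaks the MySQL wire protocol, as SingleStore does, would accept the same calls.

```python
# Minimal HTAP sketch: one table handles both the transactional write
# and the analytical read -- no extract, transform, or load in between.
# Host, credentials, and the `events` schema are hypothetical.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="app")

with conn.cursor() as cur:
    # Transactional side: record an event the moment it happens.
    cur.execute(
        "INSERT INTO events (user_id, action, ts) VALUES (%s, %s, NOW())",
        (42, "checkout"),
    )
    conn.commit()

    # Analytical side: aggregate over the same live data set immediately.
    cur.execute(
        "SELECT action, COUNT(*) FROM events "
        "WHERE ts > NOW() - INTERVAL 1 HOUR GROUP BY action"
    )
    for action, count in cur.fetchall():
        print(action, count)

conn.close()
```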
Read Post
Incumbents and Contenders in the $33B Database Market
Trending

Incumbents and Contenders in the $33B Database Market

The database market continues to surprise those of us who have been in it for a while. After the initial wave of consolidation in the late 1990s and early 2000s, the market has exploded with new entrants: column-stores, document databases, NoSQL, in-memory, graph databases, and more. But who will truly challenge the incumbents for a position in the Top 5 rankings? Oracle, IBM, Microsoft, SAP, and Teradata dominate the $33B database market. Will it be a NoSQL database? Will it be an open source business model?

Ripping and replacing existing databases has been described as heart and brain surgery – at the same time. As such, new entrants must find new use cases to gain traction in the market. In addition, the new use cases must be of enough value to warrant adding a new database to the list of approved vendors. Splitting the world roughly into analytic use cases and operational use cases, we have seen a number of different vendors come and go without seriously disrupting the status quo.

Part of the problem appears to be the strategy of using open source as a way to unseat the established vendors. While people seem willing to at least try free software (especially for new use cases), is it a sustainable business model? The open-source market is growing rapidly. However, it is still less than 2% of the total commercial database market: Gartner’s latest numbers show the open-source database market at only $562M in 2014, against a total commercial database market of $33B. Furthermore, databases are complex, carrying decades of history behind them. To match, and ultimately exceed, incumbent offerings, the key is not to have armies of contributors working in individual lanes, but rather to have a focused effort on the features that matter most for today’s critical workloads. This is especially true with the increasing number of mixed analytical and transactional use cases driven by the new real-time, digital economy.

In the case of MySQL, the most successful open source database product, less than 1% of the installed base pays anything. Monty Widenius, the creator of MySQL, himself pointed this out in a famous post a couple of years ago. The business model needs to make sense too. The open source world almost never subtracts, it adds: more components, more configurations, more scratches for individual itches. Witness the explosion of projects in the Hadoop ecosystem, and the amount of associated services revenue. A commercial model embeds features into the primary product, efficiently generating value. Today customers seek to consolidate a plethora of data processing tools into fewer multi-model databases. So, it is likely that the next vendor to win a spot in database history will do so by winning on features and workload applicability, backed by a proven business model with a primary product roadmap.

However, there are many compelling aspects of the open source model, with three core value propositions: (1) a functional, free version; (2) open source at the “edges” of the product; and (3) a vibrant community around the product. How can a commercial vendor balance both worlds? Companies pursuing these strategies include MapR in the Hadoop space. With announcements earlier this summer, SingleStore appears to be heading there too, for operational and analytical databases. SingleStore now offers a Community Edition with unlimited size and scale, and full access to core database features. While the production version of the product requires a paid license, this seems to be a reasonable way to balance the need to support a growing, focused engineering team with the core value propositions of an open-source model.

So, the question remains: as the database wars heat up and the market gets crowded, who will prevail to lead the industry? With open source becoming more mainstream, the true contenders will be the vendors that can marry open-source models with the new critical workload features enterprises demand.
Read Post
Join SingleStore in Boston for Big Data Innovation Summit
Trending

Join SingleStore in Boston for Big Data Innovation Summit

The Big Data Innovation Summit kicks off in Boston today, uniting some of the biggest data-driven brands, like Nike, Uber, and Airbnb. The conference is an opportunity for industry leaders to share diverse big data initiatives and learn how to approach prominent data challenges. We are exhibiting at booth #23 and will showcase several demos: MemCity, Supercar, and Real-time Analytics for Pinterest. On top of that, we will have games and giveaways at the booth, as well as a complimentary download of the latest Forrester Wave report on in-memory database platforms. More on what to expect:

Demos
MemCity – a simulation that measures and maps the energy consumption across 1.4 million households in a futuristic city approximately the size of Chicago. MemCity is made possible through a real-time data pipeline built from Apache Kafka, Apache Spark, and SingleStore.
Supercar – showcases the real-time geospatial intelligence features of SingleStore. The demo is built on a dataset containing the details of 170 million real-world taxi rides. Supercar allows users to select a variety of queries to run on the ride data, such as the average trip length during a given period of time. The real-world application is that business or traffic analysts can monitor activity across hundreds of thousands of vehicles and identify critical metrics, like how many rides were served and the average trip time.
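For a sense of what those Supercar queries look like, here is a hedged sketch in Python of one such window aggregate: ride count and average trip time over a chosen hour. The `rides` table and its columns are hypothetical stand-ins for the demo’s actual schema.

```python
# Sketch of a Supercar-style ad hoc query: ride count and average trip
# time over a chosen window. The `rides` schema is a hypothetical
# stand-in for the demo's 170-million-ride data set.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="supercar")

with conn.cursor() as cur:
    cur.execute(
        "SELECT COUNT(*), AVG(TIMESTAMPDIFF(SECOND, pickup_ts, dropoff_ts)) "
        "FROM rides WHERE pickup_ts BETWEEN %s AND %s",
        ("2015-01-01 17:00:00", "2015-01-01 18:00:00"),
    )
    rides, avg_secs = cur.fetchone()
    print(f"{rides} rides, average trip {avg_secs / 60:.1f} minutes")

conn.close()
```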
Read Post
Locate This! The Battle for App-specific Maps
Trending

Locate This! The Battle for App-specific Maps

In early August, a consortium of the largest German automakers including Audi, BMW, and Daimler (Mercedes) purchased Nokia’s Here mapping unit, the largest competitor to Google Maps, for $3 billion. It is no longer easy to get lost. Quite the opposite: we expect and rely on maps for our most common Internet tasks, from basic directions to on-demand transportation, discovering a new restaurant, or finding a new friend. And the battle is on between the biggest public and private companies in the world to shore up mapping data and geo-savvy engineering talent. From there, the race continues to deliver the best mapping apps. A recent story on the talent war among unicorn private companies noted: “Amid a general scramble for talent, Google, the Internet search company, has undergone specific raids from unicorns for engineers who specialize in crucial technologies like mapping.”

Wrapping our planet in mobile devices gave birth to a new geographic landscape, one where location meets commerce and maps play a critical role. In addition to automakers like the German consortium having a stake in owning and controlling mapping data and driver user experiences, the largest private companies like Uber and Airbnb depend on maps as an integral part of their applications. That is part of the reason purveyors of custom maps like Mapbox have emerged to handle mapping applications for companies like Foursquare, Pinterest, and MapQuest. Mapbox raised $52.6 million earlier this summer to continue its quest. Mapbox and many others in the industry have benefitted from the data provided by OpenStreetMap, a collection of mapping data free to use under an open license. Of course, some of the largest technology companies in the world besides Google maintain their own mapping units, including Microsoft (Bing Maps) and Apple (Apple Maps).

Investment in the Internet of Things combined with mobile device proliferation is creating a perfect storm of geolocation information to be captured and put to use. Much of this will require an analytics infrastructure with geospatial intelligence to realize its value. In a post titled Add Location to Your Analytics, Gartner notes: “The Internet of Things (IoT) and digital business will produce an unprecedented amount of location-referenced data, particularly as 25 billion devices become connected by 2020, according to Gartner estimates.” And more specifically: “Dynamic use cases require a significantly different technology that is able to handle the spatial processing and analytics in (near) real time.”

Of course, geospatial solutions have been around for some time, and database providers often partner with the largest private geospatial company, Esri, to bring them to market. In particular, companies developing in-memory databases like SAP and SingleStore have showcased work with Esri. By combining the best in geospatial functions with real-time, in-memory performance, application makers can deliver app-specific maps with an unprecedented level of consumer interaction. Google’s balloons and Facebook’s solar-powered drones may soon eliminate the dead zones from our planet, perhaps removing the word “lost” from our vocabulary entirely. Similarly, improvements in interior mapping technology promise location-specific details down to the meter. As we head toward this near-certain future, maps, and the rich, contextual information they provide, appear to be a secret weapon for delivering breakout application experiences.
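To make the geospatial point concrete, here is a hedged sketch of the kind of query an app-specific map might run: finding nearby drivers, nearest first. The table and columns are hypothetical, and GEOGRAPHY_DISTANCE is used here as SingleStore’s point-to-point distance function (in meters); check the current documentation before relying on it.

```python
# Hypothetical app-specific map query: drivers within 500 meters of a
# rider, ordered by distance. GEOGRAPHY_DISTANCE is assumed to return
# meters; the `drivers` schema is an illustrative stand-in.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="maps")
rider = "POINT(-122.3976 37.7825)"  # WKT: longitude, then latitude

with conn.cursor() as cur:
    cur.execute(
        "SELECT driver_id, GEOGRAPHY_DISTANCE(location, %s) AS meters "
        "FROM drivers "
        "WHERE GEOGRAPHY_DISTANCE(location, %s) < 500 "
        "ORDER BY meters",
        (rider, rider),
    )
    for driver_id, meters in cur.fetchall():
        print(driver_id, round(meters))

conn.close()
```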
Download SingleStore today to try a real-time database with native geospatial intelligence at: singlestore.com/free.
Read Post
Gearing Up for Gartner Catalyst in San Diego
Trending

Gearing Up for Gartner Catalyst in San Diego

Gartner Catalyst Conference kicks off next week, August 10-13 in San Diego, and we are thrilled to speak and exhibit at the event. Stop by the SingleStore booth #518 to see our latest demos: MemCity, Supercar, and Pinterest. SingleStore CEO Eric Frenkiel and the SingleStore product team will be available at the booth to answer any questions. Book a 1:1 ahead of time with a SingleStore expert here. On top of that, we have a speaking session, happy hour, games, and giveaways planned. Here’s what you can expect:

Speaking Session: Real-Time Data Pipelines with Kafka, Spark, and Operational Databases
Tuesday, August 11, 12:45 – 1:05 PM, Harbor Ballroom

What happens when trillions of sensors go online? By 2020, this could be a reality, and real-time mobile applications will become integral to capturing, processing, analyzing, and serving massive amounts of data from these sensors to millions of users. In this session, Eric Frenkiel, CEO and Co-Founder of SingleStore, will share how-to recipes for building your own real-time data pipeline and applications today with Apache Kafka, memory-optimized Apache Spark, and SingleStore.
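As a preview of the session’s subject, here is a minimal sketch of one stage of such a pipeline: the kafka-python client consumes JSON sensor readings from a Kafka topic and writes them into SingleStore over the MySQL wire protocol. The topic name, schema, and connection details are hypothetical, and a production pipeline would batch inserts (and often run Spark between the two stages).

```python
# Minimal pipeline-stage sketch: consume JSON sensor readings from a
# Kafka topic and insert them into SingleStore over the MySQL protocol.
# Topic, schema, and connection details are hypothetical; a real
# pipeline would batch inserts rather than commit row by row.
import json

import pymysql
from kafka import KafkaConsumer

consumer = KafkaConsumer("sensor-readings", bootstrap_servers="localhost:9092")
conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="iot")

for message in consumer:
    reading = json.loads(message.value)
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO readings (sensor_id, value, ts) VALUES (%s, %s, %s)",
            (reading["sensor_id"], reading["value"], reading["ts"]),
        )
    conn.commit()
```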
Read Post
SingleStore Cited As a Strong Performer by Independent Research Firm
Trending

SingleStore Cited As a Strong Performer by Independent Research Firm

As adoption of in-memory databases grows at an ever-faster pace, IT leaders turn to research firms for valuable use cases and guidance on purchasing options. We are thrilled to share that SingleStore was among the select companies that Forrester Research invited to participate in its 2015 Forrester Wave™ evaluation. In this evaluation, SingleStore was cited as a strong performer for in-memory database platforms. The report, The Forrester Wave™: In-Memory Database Platforms, Q3 2015, evaluates in-memory databases across three categories: current offering, strategy, and market presence. SingleStore received some of its highest scores in the subcategories of scale-out architecture, performance and scale, and product road map. Much of our company’s recent growth and success can be attributed to our strong leadership team and constant iteration from engineering on the product, as we work closely with our customers to solve their big data and analytics challenges.

Authors of the Forrester Wave™ write, “today’s in-memory database platforms are changing the way we build and deliver systems of engagement and are transforming the practice of analytics, predictive modeling, and business transaction management.” At SingleStore, we have championed in-memory computing since day one. When Eric Frenkiel and Nikita Shamgunov left Facebook to start SingleStore, they knew that a real-time, in-memory approach to data processing and analytics was the answer to closing gaps for enterprises using big data. The major benefit of in-memory platforms is the performance they provide when working with massive volumes of data. We believe the Forrester Wave™ report validates this approach, stating that “the in-memory database market is new but growing fast as more enterprise architecture professionals see in-memory as a way to address their top data management challenges.”

There’s another reason why in-memory technology is going to become even more critical in the next several years: predictive applications. Consumers desire personalization from every application they use, across numerous devices. Data is at the crux of predictive analytics, which transcends “context-aware” technology by enabling seamless interaction between customer and app. Companies need instantaneous access to hot data to power these kinds of seamless interactions. Many of our customers are in the throes of building predictive applications, and we get to provide the fast, scalable infrastructure to support them.

Overall, we are very excited that SingleStore has been recognized by Forrester as a strong performer. The Forrester Wave™ concludes its section on SingleStore with the following line: “customers that are building new transactional and analytical applications that need extreme performance and low-latency access and want a single database platform should look at SingleStore.” We agree.
Read Post
What’s Hot and What’s Not in Working With Data
Trending

What’s Hot and What’s Not in Working With Data

Data is the foundation of many global businesses. It fuels daily operations, from customer transactions to analytics, from operations to communications. So we decided to answer the question: what’s hot and what’s not in working with data today?

HOT: Letting your database be a database
Databases were built to store data. However, sometimes applications end up storing data themselves, a result of legacy database limitations. Storing data in an application makes it hard to update that application or to extract value from that data easily. By using a database for its intended function, developers can easily make changes to an application without affecting the data. This can save time and money in the long run.

NOT: Adding SQL to your NoSQL database, but not calling it SQL
SQL, or Structured Query Language, is the lingua franca for working with data and therefore a convenient tool for managing or analyzing data in a relational database. As a result, SQL is experiencing a renaissance. Many NoSQL databases now realize the value of SQL and SQL-like features such as JOINs. They are making hasty attempts to integrate SQL into their offerings, without acknowledging the gaps.

HOT: Giving a dash of structure to your data
Rather than spending your days wrangling unstructured data, providing some structure to your data upfront improves your ability to put that data to use down the road. Time is of the essence when it comes to most applications, and a little structure goes a long way toward enabling real-time applications. Real-time stream processing frameworks like Apache Spark make it possible to add structure to data on the fly, so it is ready to be queried as soon as it lands in the database (see the sketch at the end of this post).

HOT: Putting your data in RAM
Keeping data in memory makes it readily accessible and increases data locality. Hoping your dataset fits in RAM is not a strategy – a strategic decision to ensure data is in RAM improves the efficiency of the applications that sit on top of a database.

NOT: Calling your database representative for scaling support
Instead of calling your traditional database representative for scaling support, just add nodes with more flexible databases to achieve scale-out. Adding nodes increases the speed of data processing. For example, with SingleStore you can add nodes while the cluster remains online.

HOT/NOT: Knowing what is / knowing what was
People are interested in staying up to date with the latest data processing techniques. Knowing what works for the present reality is more important than sticking with trends of the past. Real-time analytics will pave the way forward for business and ensure data does not remain trapped in dark corners.

If you work with databases or data, understanding the hot topics of the moment will save you from having to do battle with your data as you build applications, scale, and innovate for your companies and yourselves.
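As referenced above, here is a minimal Spark Streaming sketch in Python of adding structure on the fly: raw JSON events arrive on a socket and are parsed into structured tuples once per one-second batch, ready to be persisted and queried. The socket source and field names are hypothetical.

```python
# Minimal "structure on the fly" sketch: raw JSON lines arrive on a
# socket and are parsed into structured (user_id, action, ts) tuples
# each one-second batch. Source and field names are hypothetical.
import json

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="structure-on-the-fly")
ssc = StreamingContext(sc, batchDuration=1)

lines = ssc.socketTextStream("localhost", 9999)
rows = lines.map(json.loads).map(
    lambda event: (event["user_id"], event["action"], event["ts"])
)
rows.pprint()  # a real pipeline would persist each batch via foreachRDD

ssc.start()
ssc.awaitTermination()
```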
Read Post
Four Reasons Behind the Popularity and Adoption of In-Memory Computing
Trending

Four Reasons Behind the Popularity and Adoption of In-Memory Computing

There is no question that data is infiltrating our world. 451 Research recently predicted that the Total Data market will double in size, from $60 billion in 2014 to $115 billion in 2019. IDC suggested that Internet of Things (IoT) spending will reach $1.7 trillion in 2020, and noted that “the real opportunity remains in the enterprise…” And as stated in a recent Gartner blog post, while the three leading independent Hadoop distribution players measure their revenue in tens of millions, commercial database vendors like Oracle, Microsoft, IBM, SAP, and Teradata measure revenues in billions or tens of billions of dollars in a $33 billion market. The data market is hot, and in-memory delivers the capabilities companies need to keep up. In the report Market Guide for In-Memory DBMS, published December 2014, analysts Roxane Edjlali, Ehtisham Zaidi, and Donald Feinberg outline the growing importance of in-memory.

Four Reasons for the Popularity and Adoption of In-Memory

Declining costs in memory and infrastructure
Server main memory (now called server-class memory) is expanding to sizes as high as 32TB and 64TB at increasingly lower cost, thereby enabling new in-memory technologies such as IMDBMSs, because many applications’ working sets fit entirely into this larger memory. This rapid decline in infrastructure and memory costs results in significantly better price/performance, making IMDBMS technology very attractive to organizations.

Growing importance of high-performance use cases
The growing number of high-performance, response-time-critical, and low-latency use cases (such as real-time repricing, power grid rerouting, and logistics optimization), which are fast becoming vital for better business insight, require faster database querying, concurrency of access, and faster transactional and analytical processing. IMDBMSs provide a potential solution to all of these challenging use cases, thereby accelerating their adoption.

Improved ROI promise
A cluster of small servers running an IMDBMS can support most or all of an organization’s applications, drastically reducing operating costs for cooling, power, floor space, and resources for support and maintenance. This will drive a lower total cost of ownership (TCO) over a three- to five-year period and offset the higher total cost of acquisition from more expensive servers.

Improved data persistence options
Most IMDBMSs now offer features supporting “data persistence,” that is, the ability to survive disruption of their hardware or software environment. Techniques like high availability/disaster recovery (HA/DR) provide durability by replicating data changes from a source database, called the primary database, to a target database, called the standby database. This means that organizations can continue to leverage IMDBMS-enabled analytical and transactional use cases without worrying about prolonged system downtime or losing their critical data to power failures.

From Market Guide for In-Memory DBMS, Roxane Edjlali, Ehtisham Zaidi, Donald Feinberg, 9 December 2014

Download the Complete Report
If you’d like to read more on the state of the in-memory DBMS market, download the entire report here.
Read Post
Scaling a Sales Team During Hypergrowth – Meetup with Former EVP of Sales at Box
Trending

Scaling a Sales Team During Hypergrowth – Meetup with Former EVP of Sales at Box

The second official SingleStore Meetup takes place next Tuesday, July 28th at 6pm! This time, we will focus on the art and science of building a successful sales team through hypergrowth. Come join us for pizza and libations, as well as an opportunity to glean insights about scaling a sales team at a startup. Our featured guest next week is Jim Herbold, the first sales hire at Box and former Executive Vice President of Global Sales. Leveraging homegrown strategies, Jim grew the Box sales team to 400 people and took revenue from $1 million to $174 million. Jim will share go-to-market strategies, the importance of self-disruption, and sales-driven processes that are key to sales success during hypergrowth. We are thrilled to have Jim join us for a valuable learning experience.

The Meetup Agenda:
6:00-7:00pm – Happy Hour with heavy hors d’oeuvres
6:30-7:00pm – Optional contest: Database Speed Test – Win a Drone!
7:00-7:30pm – Main Presentation: Jim Herbold on Scaling Hypergrowth Sales
7:30-8:00pm – Q&A, continued Happy Hour

Feel free to bring your laptop and participate in the optional Database Speed Test for your chance to win an Estes ProtoX drone! Save any questions for our Q&A session directly following the presentation. RSVP: http://www.meetup.com/SingleStore. Located in the heart of San Francisco’s bustling South of Market (SOMA) neighborhood, SingleStore Meetups are a fun way to meet and interact with neighboring technology startups and enthusiasts in Silicon Valley.
Read Post
Join SingleStore at the Data Science Summit in San Francisco
Trending

Join SingleStore at the Data Science Summit in San Francisco

We are excited to exhibit at the Data Science Summit on Monday, July 20, in San Francisco. Stop by the SingleStore booth to learn about our MemCity demo, pick up a cool t-shirt, and play our reaction test game to win an Estes ProtoX Mini Drone.

About the Data Science Summit
The Data Science Summit is a non-profit event that connects researchers and data scientists from academia and industry to discuss the art of data science, machine learning, and predictive applications.

What We Have in Store for the Event
Visit the SingleStore booth to learn about:
Our latest demo, MemCity, which leverages Kafka, Spark, and SingleStore to process and analyze data from various energy devices found in homes, all measured in real time.
How in-memory computing can combat latencies in the enterprise, such as batch loading and query execution latency.
How SingleStore enables data analysts to get real-time insights using SQL.
SingleStore Community Edition – a free downloadable version of SingleStore that comes without limitations on capacity, cluster size, or time.

Recommended Sessions

How Comcast uses Data Science to Improve the Customer Experience
Monday, July 20, 10:50am – Salon 9
Comcast Labs manager Dr. Jan Neumann will discuss how Comcast improves the visible parts of the user experience by powering the personalized content discovery algorithms and voice interface on the X1 set-top boxes. Bonus: learn how the Comcast VIPER team is using SingleStore for real-time stream processing.

What’s New in the Berkeley Data Analytics Stack
Monday, July 20, 1:20pm – Salon 9
In this talk, Prof. Mike Franklin of the Berkeley AMPLab will give a quick overview of BDAS (pronounced “badass”) and then describe several newer BDAS components, including the KeystoneML machine learning pipeline framework, the Velox model serving layer, and the SampleClean/AMPCrowd components for human-in-the-loop data cleaning and machine learning.
Read Post