In this 25th episode of the TECHunplugged Podcast we welcome Pierluca Chiodelli, VP of Product Management Storage Portfolio & Customer Operations at Dell EMC. This episode was recorded live at Dell Technologies World 2019 in Las Vegas.
Podcast co-host Max Mortillaro (@darkkavenger) talks with Pierluca about his new role, the “Data Story” of Dell Technologies, what we can expect from the Dell Technologies storage portfolio (especially in terms of midrange storage portfolio consolidation), as well as the growing role of VMware in the entire Dell Technologies ecosystem.
Pierluca Chiodelli is currently the VP of Product Management for Storage Portfolio and Customer Operations. Pierluca’s organization oversees portfolio strategy for the Storage BU and leads the technical resources across the major storage products.
His teams are responsible for creating a single storage portfolio vision and for driving solutions that enable end users and service providers to transform their operations and deliver information technology as a service.
Pierluca has been with Dell EMC since 1999, with experience in field support and core engineering across Europe and the Americas. Prior to joining EMC, he worked at Data General and as a consultant for HP. Pierluca holds degrees in Chemical Engineering and Information Technology.
01:25 What are Dell Technologies’ plans to translate their “Data” story into tangible solutions, and how will they bake this into their portfolio?
02:40 Customers can buy services in the cloud that have similar capabilities as on-premises products
03:00 The storage portfolio should cover the edge, core, and cloud. Beyond coverage, the necessity to integrate automation in all products
04:00 From a Dell Technologies perspective, VMware Cloud Foundation is the interconnect hub for the entire DT portfolio
04:57 Is Dell moving to a data-centric approach to storage, and what about data mobility?
07:30 Data migration and automation integrations
08:08 Storage is the foundation of everything – are there plans for Dell Technologies to eventually offer a one-stop-shop / portal to consume data and cloud services?
09:40 Unity Cloud Edition – an SDS implementation of Unity for the cloud
11:17 VMware keeps being mentioned in a storage discussion; what is the role of VMware in the Dell Technologies (and especially the Dell EMC) future?
14:01 Some insights around the rationalisation of Dell EMC’s very broad product portfolio
16:05 The importance of maintaining the installed base, and having a loyal relationship with customers
17:30 More than a portfolio rationalisation, a rationalisation of how platforms and solutions are built
18:20 Final comments: rationalisation is a journey
In this 24th episode of the TECHunplugged Podcast we welcome Walter Hinton, Head of Corporate and Product Marketing at Pavilion Data. This episode was recorded live at Dell Technologies World 2019 in Las Vegas.
Podcast co-host Max Mortillaro (@darkkavenger) talks with Walt about the challenges with traditional all-flash infrastructures and the specific needs of modern scale-out applications. Walt covers those topics extensively, then goes on to explain why NVMe-based systems need purpose-built architectures, and how Pavilion Data is a great fit for scale-out / massively parallel applications and filesystems.
About Pavilion Data
Pavilion Data builds the industry’s leading NVMe-oF storage array. It is a true end-to-end NVMe solution, from the host all the way down to the media. The array is 100% standards-compliant with zero host-side presence, and was designed for modern, massively parallel clustered web and analytics applications.
Pavilion’s Storage Array delivers the next generation of composable disaggregated infrastructure (CDI) by separating storage from computing resources to allow them to scale and grow independently. In today’s large-scale environments it allows customers to become more agile by delivering the exact amount of composable resources at any given time.
Walt is responsible for Corporate and Product Marketing. He brings a deep technical background along with proven success in building marketing teams and programs. Over a 25-year career in data storage, Walt has helped build successful startups like McDATA, ManagedStorage International and Virident Systems. He also served as Chief Strategist at StorageTek where he was instrumental in the creation of the Storage Network Industry Association (SNIA). Most recently, Walt was Sr. Global Director of Product Marketing at Western Digital. He has a BA from William Jewell College and an MBA from the University of Denver.
00:00 Introduction, Walt’s Presentation
02:00 Pavilion Data in numbers
02:23 Traditional all-flash array architectures aren’t designed for NVMe because of bottleneck around controllers
03:25 Rebuild times in case of a node loss (circa 25 minutes per TB) limit the node capacity of direct-attached storage in scale-out storage architectures
05:25 Pavilion Data: A storage design inherently built for NVMe: scalable controllers, plenty of networking (40x 100 GbE), a switch-based PCIe backplane, and the ability for customers to source the NVMe drives of their choice
06:40 Walt explains that Pavilion Data’s architecture allows for a data rebuild at a rate of 5 minutes per TB
07:32 Use cases, industries and verticals for Pavilion Data
08:22 The perfect fit for Pavilion Data: scale-out applications leveraging Cassandra, MongoDB, MariaDB etc.
08:38 The Pavilion Data – Pivotal partnership – supporting Greenplum (an open-source massively parallel data platform for analytics, machine learning and AI)
09:01 A take on financial services, massively distributed databases, and backup challenges with multi-petabyte data lakes
10:20 Talking about protocols (Pavilion Data is block-based) and clustered filesystems (Spectrum Scale, etc.)
11:41 Continuing the discussion on supercomputing and massively parallel compute, media & entertainment, as well as government
13:05 Describing the physical aspects of a Pavilion Data system
13:50 A 4U, fully fault-tolerant system achieving 120 Gb/s reads or 90 Gb/s writes – what is the equivalent with a traditional AFA?
15:05 The metaphor of the nail and the hammer
15:31 Partnerships & Sales – how to engage Pavilion Data
16:25 The partnership with Dell
17:30 The synergy between Pivotal and Pavilion Data – embracing customer needs
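The rebuild-time figures quoted at 03:25 and 06:40 are worth putting into concrete terms. Here is a quick back-of-the-envelope sketch; note that the 50 TB node capacity is our own illustrative assumption, not a figure from the episode:

```shell
# Rebuild time for a failed node, using the per-TB rates quoted in the episode.
# The 50 TB node capacity is an assumed example, not a number from the show.
node_tb=50
das_rate=25       # minutes per TB for DAS-based scale-out (per 03:25)
pavilion_rate=5   # minutes per TB claimed by Pavilion Data (per 06:40)

echo "DAS scale-out rebuild: $(( node_tb * das_rate )) minutes"       # 1250 min, ~21 hours
echo "Pavilion Data rebuild: $(( node_tb * pavilion_rate )) minutes"  # 250 min, ~4 hours
```

At roughly a day per failed 50 TB node, the traditional rate is what caps practical node capacity in DAS scale-out designs, which is exactly the constraint Walt describes.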
In this 23rd episode of the TECHunplugged Podcast we welcome Kam Eshghi, VP of Strategy & Business Development at Lightbits Labs. This episode was recorded live at Dell Technologies World 2019 (early May 2019) in Las Vegas.
Lightbits Labs has developed a software-defined storage solution leveraging NVMe over TCP. Their solution allows the disaggregation of storage from compute by offering DAS performance with enterprise-class data services, combined with massive scalability.
A sizeable part of the founding team was behind DSSD, a storage solution that was later acquired by EMC. DSSD was one of the very first storage architectures to leverage NVMe drives, which lends some confidence to Lightbits Labs’ NVMe over TCP concept. Lightbits Labs was founded three years ago and came out of stealth mode in April 2019, so we’re really thrilled to get prime time with them!
During the course of the discussions, podcast co-host Max Mortillaro (@darkkavenger) talks with Kam about the product architecture, its concepts and the use cases for NVMe over TCP.
00:00 Presentations, introduction to Lightbits Labs and NVMe over TCP
02:00 Addressing the challenge of scalability of DAS and performance of traditional architectures
03:30 The genesis of NVMe over TCP, and why NVMe over TCP is relevant today
05:20 Kam states that no specific drivers are needed, and that Lightbits Labs made its source code available
05:45 All of the intellectual property of Lightbits Labs resides on the target side; Kam also mentions that Lightbits Labs sell a hardware solution called SuperSSD, and that they also have an optional accelerator card
06:22 Max & Kam discuss partnerships & go-to-market strategy: software-only, via Dell OEM, or via the Lightbits SuperSSD appliance
08:00 Kam: “All you need to get started is a server with NVMe drives and a standard Ethernet NIC”
08:30 Let’s talk architecture and data services
09:45 Kam mentions a “Global FTL”, which gets Max interested in understanding how the internal logic of NVMe SSD drives is managed
12:30 More insights into data services
13:45 Understanding the customer base and use cases for Lightbits Labs: SaaS companies, Service Providers, etc.
15:40 Talking about data protection, replication, and availability
16:30 Application use cases: distributed databases (Cassandra, MongoDB, etc.), High-Performance Analytics workloads, and overall anything that requires high performance and operates at scale
18:00 Max’s usual “WOW” / speechless moment; Kam shares his excitement about customer adoption, not only by hyperscalers but also by Private Cloud initiatives within Enterprise IT organisations
19:00 Since Lightbits Labs is block-based, Max asks whether there are any plans to offer managed services, if that makes sense at all
20:15 Covering the topic of licensing models
21:09 What about the optional acceleration card? Kam explains that the card sits on storage nodes, and walks through the decision points about when it may make sense to use it. The card offers flexibility to customers who may want to select entry-level CPUs to keep costs under control.
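Kam’s point at 08:00, that all you need is a server with NVMe drives and a standard Ethernet NIC, reflects the fact that the NVMe/TCP initiator has been part of the mainline Linux kernel since 5.0. As an illustration (not Lightbits-specific: the IP address, port and NQN below are placeholders, and the commands need a live NVMe/TCP target to actually succeed), connecting from a client with the standard nvme-cli tooling looks roughly like this:

```shell
# Load the NVMe/TCP initiator module (mainline Linux, kernel 5.0+)
modprobe nvme-tcp

# Discover subsystems exported by a target (address and port are placeholders)
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder value)
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2019-04.example:subsystem1

# The remote namespace now shows up as a regular local block device
nvme list
```

No proprietary host-side driver is involved, which is the crux of the “standard Ethernet NIC” argument.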
In this 22nd episode of the TECHunplugged Podcast we welcome Josh Goldenhar, VP of Products at Excelero. Prior to joining Excelero, Josh was responsible for product strategy and management at EMC (XtremIO) and DataDirect Networks.
Podcast co-hosts Max Mortillaro (@darkkavenger) and Arjan Timmerman (@Arjantim) talk with Josh about Excelero, the solution’s architecture, its use cases & differentiators.
00:00 Presentations & introduction to Excelero
01:50 We learn that Excelero is software-based and uses NVMe drives. We ask Josh about whether there is a Hardware Compatibility List. Josh goes on to talk about custom built vs. co-engineered solutions (Dell, SuperMicro) and mentions a recently announced partnership with Lenovo.
03:25 Josh explains how Excelero was built from the start to be hardware agnostic and provides a perspective about how each hardware vendor is looked at based on their own hardware specificities.
04:46 Consuming Excelero: what customers need to do to get their Excelero storage up and running, either via pre-built appliances or via custom-built hardware with the installation of a couple of RPM packages.
06:40 Is Excelero block-based or file-based? Josh explains that Excelero is a block-based distributed storage and provides background about the rationale to go block-based only.
08:40 We ask Josh about Excelero’s customers and their use cases. Low latency, consistency in response times, and ability to scale are key to those customers. Josh then goes on to explain some of the common challenges faced by web-scalers and how Excelero fits in the picture.
11:25 The case of Technicolor, an Excelero customer – how the motion picture industry requires bandwidths akin to those used in HPC clusters
12:35 Excelero storage deployment modes (disaggregated vs. converged) and their technical implications
15:35 A look into network interconnects that are supported by Excelero and throughput capabilities
19:20 Talking about HPC and Local Burst Buffer / Local Scratch – integration with SLURM job scheduler and local nodes
23:50 Being mind blown and forgetting a question – the Venn Diagram of Happy and Sad
24:20 Remembering a question: does Excelero support different media / performance tiers (such as SATA SSD, HDD, or 3D XPoint)? Josh provides a comprehensive view of what is supported, backed by Excelero’s rationale about why things are done or implemented in a specific way.
27:30 How / where to purchase Excelero, and how is it licensed?
In this 21st episode of the TECHunplugged Podcast we welcome Barbara Murphy, VP of Marketing at Weka.io. Weka was presenting at Storage Field Day 18, where Max had the opportunity to be physically present, and Arjan followed remotely thanks to the magic of the Interwebs(tm). Weka.io claims to have the fastest, most scalable parallel file system for AI and for technical compute workloads. We set off on a journey to ask Barbara Murphy some questions about exactly that topic.
Podcast co-hosts Max Mortillaro (@darkkavenger) and Arjan Timmerman (@Arjantim) talk with Barbara about her background and about Weka.io. It’s been a privilege for us to talk with Barbara, as she delivered a truly mind-blowing presentation at Storage Field Day 18. Barbara has more than 20 years of experience in the industry, with past C-level marketing leadership positions at HGST, Panasas and others.
Show schedule:
00:00 Presentations & Introduction
01:15 The weka.io architecture – a software-defined storage solution
02:47 The weka.io filesystem and supported protocols
04:06 What are the most common customer use cases for Weka.io, and what is the adoption model? Are customers starting with AI/HPC and expanding weka.io usage to other use cases?
06:10 What kind of storage media is recommended / supported with this software-defined storage system?
08:30 Talking about the weka.io object store
10:00 Scalability and performance – how to scale the filesystem, and is increase in performance predictable?
11:52 What is the relationship between weka.io filesystem and other AI / HPC filesystems such as BeeGFS or GPFS?
14:10 What is weka.io doing to stay ahead of the competition in massively parallel filesystems?
16:01 Talking about Storage Field Day 18 and monster performance numbers
17:42 Are benchmarks a truly reliable way to assess real world performance?
19:18 Talking about weka.io partners, how can customers purchase the product and how it is licensed
21:37 AWS was mentioned earlier, are cloud deployments supported? If yes, on which public cloud providers?
During this podcast our host Arjan Timmerman talks with Tom van Peer, the Enterprise Presales lead at Dell Technologies in the Netherlands. The conversation covers the datacenter technology provided by Dell Technologies, and thus focuses on Dell, EMC, VMware, Pivotal and other products in the Dell Technologies portfolio.
Tom started his career at Philips, working on AI many years ago, so we also focus on the AI trends Tom sees these days and what they mean for a company like Dell Technologies. We also talk about two upcoming events, one of them in the Netherlands, where Tom will talk about AI. You can find the (free) registration link below:
The conversation is about what we think of Dell Technologies: the past, but also the road ahead for this great company. The companies under the Dell Technologies umbrella offer a great opportunity, but they need the right guidance and steering to act as one. Max also takes us back to the Dell Technologies analyst event in Chicago in late 2018, where he heard directly from Michael Dell and his team about the Dell Technologies vision.
We also talk about the opportunity we get to be at Tech Field Day Extra at Dell Technologies World 2019, and the awesomeness and opportunities provided by the Tech Field Day team, which, by the way, is celebrating its ten-year anniversary (congrats, team!!). Looking at their website, we talk a little about the #TFDx sponsor Liqid and its composable infrastructure.
Last but not least, we go into what we look forward to hearing at the event, some of the #DTW sponsors like Komprise (listen to our podcast with them here) and their take on Data Management, and where we would like to see (and hear) Dell Technologies focus over the next couple of years.
In this 18th episode of the TECHunplugged Podcast we welcome storage & flash memory industry veteran Rob Peglar (@peglarr) who is also President of Advanced Computation and Storage LLC, and Member of the SNIA Board of Directors. For the younger ones, Rob is the former VP of Advanced Storage Memory at Micron (hello 3D XPoint) and former Americas CTO for Isilon (then EMC, today Dell Technologies).
Podcast co-hosts Max Mortillaro (@darkkavenger) and Arjan Timmerman (@Arjantim) talk with Rob about many exciting topics such as the evolution of the computer industry from a memory and architectural concepts perspective, to then discuss about the latest innovations in persistent memory, 3D XPoint, 3D NAND and other industry matters.
In the 17th episode of the TECHunplugged Podcast we’re joined by Boyan Ivanov, co-founder of StorPool Storage. Podcast co-hosts Max Mortillaro (@darkkavenger) and Arjan Timmerman (@Arjantim) talk with Boyan about the StorPool product offering, how the product has evolved, why customers are choosing the StorPool product, and what we can expect from StorPool in the future.
00:00 Max: Introduction
00:40 Boyan: What and who is StorPool, and what do they do?
02:32 Arjan: What is the StorPool journey?
03:08 Boyan: The history, and a little of the future, of StorPool
04:25 Max: What is the primary driver for StorPool’s clients to choose them?
04:50 Boyan: StorPool wants to be the best at block-based storage, and delivers on this
06:34 Max: Could you tell us a bit more about the architecture of your product?
06:47 Boyan: He started coding at the age of ten; he then takes a deeper dive into the architecture
08:48 Arjan: Not being a Silicon Valley company, how do you compete (or not) with them?
09:38 Max: Another question on that: is the StorPool customer base European or global?
09:54 Boyan: Answers both questions, on competing against Silicon Valley companies and on being a global company
12:08 Max: So StorPool, being funded in a much different way, doesn’t feel the VC pressure as much?
12:41 Boyan: Correct. Having a business that is resilient and self-funded is a much more sustainable way to grow a company
14:04 Arjan: What is a feature your customer base is requesting at the moment?
14:42 Boyan: A software-first approach (SDS), then Backup and Recovery (and everything in between)
16:49 Arjan: Is that then something you’re building into the platform, or do you have partners helping you provide certain solutions?
17:25 Boyan: StorPool is a pure software company, but also a storage team as a service
19:15 Arjan: Can StorPool customers use on-prem and cloud data storage as one?
19:48 Boyan: Not yet, but as the way forward for StorPool is hybrid, this will be added
20:53 Arjan: What about APIs? Is StorPool, as a software company, API-driven?
21:03 Boyan: The StorPool product is entirely API-driven
Arjan’s last question is about monitoring; Boyan answers that StorPool provides its own monitoring, and also gives customers the opportunity to use the API and other tools to hook into their existing monitoring tooling.
TECHunplugged would like to thank Boyan and StorPool for this very informative podcast, and we’re looking forward to hearing a lot more from this great company!
In this 16th episode of the TECHunplugged Podcast (recorded on 24-Jan-2019) we’re welcoming Filip Verloy, EMEA Field CTO for Rubrik. Podcast co-hosts Arjan Timmerman (@Arjantim) and Max Mortillaro (@darkkavenger) talk with Filip about a lot of things: his tenure at Rubrik, how the product has evolved, why customers are interested in Rubrik’s approach, what is that new Data Management trend, and what can we expect from Rubrik going forward.
01:10 Is three years at a startup a long time?
01:46 Rubrik’s mission statement in Filip’s own words
03:17 Is Rubrik a sort of “iPhone revolution” for Data Protection?
06:31 What is the most valuable reason why customers choose Rubrik over their competition?
10:22 Polaris & metadata collection use cases
13:53 Radar: the first application on top of Polaris; the concept behind Radar and how it helps Rubrik customers
17:04 Beyond Radar: what can we expect next, and what about leveraging metadata to comply with GDPR?
20:26 Rubrik and the cloud / multicloud story: what can customers do, what are the options?
23:05 Getting a unified view of all datasets across multiple clouds with Rubrik Polaris, and helping move data between clouds
25:30 How can customers consume the Rubrik products / platform, and are there any OEM Alliances / Partnerships?