EP28 – Introducing Clumio, A Cloud-Based Data Platform Launching With Data Protection As A Service – with Poojan Kumar



In this 28th episode of the TECHunplugged Podcast we welcome Poojan Kumar, Co-Founder and CEO at Clumio.

Podcast co-hosts Max Mortillaro and Arjan Timmerman talk with Poojan about his new company, what led him to enter a different market and what Poojan hopes to build with Clumio.

About Clumio

Clumio is secure backup as a service that consolidates the protection of an enterprise data center and any remote sites, with no hardware or software to size, configure, manage – or even buy at all. As enterprises move aggressively to the cloud, they can use Clumio to protect workloads like VMware Cloud on AWS and native AWS services.

Clumio, innovator of authentic SaaS for enterprise backup, today (13-Aug-19) announced $51M in funding from leading Silicon Valley investors and officially launched its flagship backup-as-a-service product. With this new service, enterprises can eliminate hardware and software for on-premises backup and avoid the complexity and cost of running third-party backup software in the cloud. By taking full advantage of cloud scale, economics, and elasticity, there is now a secure and efficient way to protect on-premises, VMware Cloud on AWS, and native AWS workloads.

About Poojan

Poojan Kumar is the co-founder and CEO of Clumio. Poojan brings 18 years of experience in cloud computing and storage and is known for seeing an opportunity for change, innovating, and capitalizing on it. Poojan founded and built PernixData, which was acquired by Nutanix in 2016; he then served as Nutanix's Vice President of Engineering and Products. Earlier in his career, he was Head of Data Products at VMware and a founding member of the Oracle Exadata team.

Show Schedule:

  • 00:00 Show Introduction
  • 01:00 Poojan’s presentation – Exadata, PernixData and the Nutanix acquisition
  • 02:23 Introducing Clumio, Poojan’s latest company
  • 05:05 The « why » of Clumio – a shift from on-premises towards public cloud, and the « SaaS-ification » of products & platforms
  • 06:47 The « what » of Clumio: building a data platform on top of the public cloud, starting with AWS
  • 07:05 The first brick of the Clumio data platform: Data Protection, or delivering Backup-as-a-Service
  • 08:10 Will this first brick expand to other public cloud providers?
  • 09:22 Initial focus is on AWS, with multiple US regions supported at launch
  • 09:40 VMware on-premises as well as VMware on AWS will also be supported at launch
  • 10:55 Is Software-as-a-Service relevant to the way Clumio is consumed, or is Clumio’s intent to backup SaaS applications, or is it a mix of both?
  • 11:52 Poojan: « we are not in the business of selling infrastructure »
  • 12:21 How to get started with Clumio
  • 15:50 How is Clumio’s data stored on the cloud? Is this using S3, or is there any proprietary file system in the background?
  • 18:03 What is Clumio’s view on bandwidth / throughput required to backup the data – are there any data reduction methods applied at the source?
  • 22:20 Arjan’s considerations on bandwidth related matters for customers in the Asia Pacific markets
  • 23:18 Talking about Clumio’s initial customers
  • 25:48 In a crowded market such as Data Protection, what is the main differentiator that Clumio brings to the table?
  • 28:08 What is Clumio’s consumption model, and how can customers purchase the Clumio solution?
  • 30:06 Final Comments & Conclusion

EP27 – VAST Data – A Revolutionary Storage Platform For The Next Decade – with Howard Marks



In this 27th episode of the TECHunplugged Podcast we welcome Howard Marks, Technologist Extraordinary and Plenipotentiary at VAST Data.

Podcast co-hosts Max Mortillaro and Arjan Timmerman talk with Howard about VAST Data, his move to the dark side (we hear they have nice cookies at VAST Data), but also about the current state of all flash arrays, and why VAST Data is making a difference in the storage world.

VAST Data is bringing revolutionary flash economics to the market by combining 3D XPoint, NVMe-oF and QLC Flash with in-house data reduction technologies, effectively delivering a single tier of storage that offers outstanding performance at the price point of disk-based storage.

About VAST Data

VAST Data’s mission is to bring an end to decades of complexity and application bottlenecks caused by mechanical media and by the complex tiering of data across different types of storage systems. To achieve this goal, they reduce the problem in order to achieve exponential gains. The result: a dramatically simplified customer experience, paired with the ability to compute on vast reserves of data, all in real time.

Over three years, the VAST story has gone from concept to reality. Since releasing V1 of its Universal Storage platform in November 2018, VAST has established itself as one of the fastest-growing IT infrastructure companies of all time.

About Howard

In over 25 years as an independent consultant, Howard has built and/or re-engineered server and storage infrastructures and networks for organizations ranging from Borden Foods and the State University of New York at Purchase to accounting and law firms. He started testing and reviewing products at PC Magazine in the late 1980s and has written hundreds of articles and product reviews for Network World, Network Computing and InformationWeek, among others.

He has spoken at Comdex, Interop and Networks Expo, and developed training programs for organizations including JP Morgan and American Express. Where other analysts typically have marketing or sales backgrounds, Howard’s continuing involvement with users facing real problems brings a perspective those users find more useful.


Show Schedule:

  • 00:00 Introduction
  • 01:30 Crossing the line – going from independent to working for a storage vendor
  • 01:45 A bit of background / history about drivers to all-flash arrays
  • 02:35 The challenge of storage tiers & data fragmentation in the data center
  • 03:25 VAST Data as a universal all-flash storage platform covering a pyramid of use cases
  • 03:50 Flash at the price of spinning disks, what’s behind it?
  • 04:45 Talking about tiers and performance
  • 05:45 « The data you want to process is always on the wrong tier at some point in time »
  • 06:45 How is the VAST Data promise achieved? What’s the « magic » behind it?
  • 07:15 VAST Data: a clean slate, 3rd-gen all-flash array design built around 2018 technology & concepts – 3D XPoint, NVMe-oF, QLC Flash
  • 08:00 A matter of endurance: avoiding QLC Flash wear with 3D XPoint and very wide & deep data stripes
  • 09:31 Data reduction mechanisms: challenges of existing technology and how VAST Data handles these
  • 11:24 A look into the disaggregated, shared everything VAST Data architecture
  • 14:06 Data reduction as the second piece of VAST Data’s secret sauce: global deduplication & compression based on similarity
  • 17:07 Talking about VAST Data technology concepts on the VAST Data blog
  • 17:30 The third VAST Data secret: scalability, or « our systems are vast »
  • 19:08 What are VAST Data typical customers & use cases
  • 21:33 Discussing slicing & dicing a VAST Data system, as well as multi-tenancy use cases
  • 22:57 What data services are offered by VAST Data beyond data reduction?
  • 23:39 Consuming VAST Data – what is the selling model, and what do customers need to buy?
  • 24:11 Being selective with partners – selling at petabyte or exabyte scale can be challenging
  • 24:29 Entry point – one VAST Data enclosure and four VAST servers – packaged similarly to an HCI appliance – 100 GbE switches are also provided
  • 26:30 Howard’s perspective on density – from the « Petabyte Data Center » to a petabyte in 1U
  • 26:55 VAST Data’s largest installation
  • 28:23 Storing all the data in a single tier eliminates the pain of pre-staging data from slower tiers to high speed scratch space
  • 29:56 Each storage array on Earth falls under one of these three models: straight scale up (2+ controllers); shared-nothing scale out; or a combination of scale up & scale out
  • 31:44 VAST breaks those three models – 2018 technology makes this possible
  • 34:15 Final comments

More about VAST Data from our Storage Field Day friends:

[vimeo 320386734]

[vimeo 320389906]

[vimeo 320386927]


EP26 – CyberArk – Adversary Simulation: The Red Team Is Your Friend – with Shay Nahari



In this 26th episode of the TECHunplugged Podcast we welcome Shay Nahari, Head of Red Team Services at CyberArk. This episode was recorded live at CyberArk Impact in Amsterdam, in May 2019.

Podcast co-hosts Max Mortillaro (@darkkavenger) and Arjan Timmerman (@arjantim) talk with Shay about the CyberArk Red Team activities, adversary simulation services, identifying critical assets and protecting them.

About CyberArk

CyberArk is the global leader in privileged access security, a critical layer of IT security to protect data, infrastructure and assets across the enterprise, in the cloud and throughout the DevOps pipeline. CyberArk delivers the industry’s most complete solution to reduce risk created by privileged credentials and secrets. The company is trusted by the world’s leading organizations, including more than 50 percent of the Fortune 500, to protect against external attackers and malicious insiders.

CyberArk pioneered the market and remains the leader in securing enterprises against cyber attacks that take cover behind insider privileges and attack critical enterprise assets. Today, only CyberArk is delivering a new category of targeted security solutions that help leaders stop reacting to cyber threats and get ahead of them, preventing attack escalation before irreparable business harm is done.

About Shay

Shay Nahari is the Head of Red Team Services at CyberArk, where he specializes in targeted cyber operations, malware evasion and offensive research. With nearly two decades of cybersecurity experience, he is on the front lines helping global organizations improve their ability to detect and react to targeted attacks, using adversary simulation and advanced real-life tactics, techniques and procedures.

Nahari previously founded and served as CEO of Red-Sec Inc., a Red Team and consulting services provider, and served as a commander in the Israel Defense Forces (IDF) communications unit. With a passion for hacking, he has won multiple capture-the-flag competitions – including at Black Hat 2018, where he received the SpecterOps Black Badge.


Show schedule:

  • 00:00 Introduction & Presentation
  • 00:48 Activities in focus for the CyberArk Red Team
  • 01:35 Differentiating between adversary simulation services (Internal vs External adversaries)
  • 02:30 Two questions customers should ask themselves: what are their crown jewels, and what risks are they trying to protect against
  • 03:00 Are Red Teams our friends?
  • 05:35 Helping customers focus on protecting the right pieces of their infrastructure
  • 07:10 Identifying the attack surface, and defining privileged access
  • 08:15 « Each employee is an attack surface, identities are the new perimeter »
  • 09:05 Privileged access goes way beyond admin rights
  • 10:20 How the shift to cloud and containers is impacting the security landscape
  • 11:10 « Ansible access is the new domain admin »
  • 11:50 Cloud makes undetected data leakage possible
  • 12:45 Talking about vulnerabilities and privilege escalation mechanisms – credential abuse is the most common way to get inside a network
  • 14:30 Protecting credentials and isolating sessions as a way to reduce the attack surface
  • 15:00 How do the « bad guys » in the Red Team work with the « good guys » in the Blue Teams? What does the collaboration look like, and how do the teams interact?
  • 16:00 « When we get hired, our job is to make our customers more secure »
  • 17:00 Red Teams can be influenced by the creativity of Blue Teams
  • 18:05 Conclusion: words of advice, shifts in the industry, and supply chain attacks
  • 20:30 End

EP25 – Dell Technologies: Storage Portfolio, Data Strategy and VMware’s Key Role – with Pierluca Chiodelli



In this 25th episode of the TECHunplugged Podcast we welcome Pierluca Chiodelli, VP of Product Management Storage Portfolio & Customer Operations at Dell EMC. This episode was recorded live at Dell Technologies World 2019 in Las Vegas.

Podcast co-host Max Mortillaro (@darkkavenger) talks with Pierluca about his new role, the “Data Story” of Dell Technologies, what we can expect from the Dell Technologies storage portfolio (especially in terms of midrange storage portfolio consolidation), as well as the growing role of VMware in the entire Dell Technologies ecosystem.

About Pierluca

Pierluca Chiodelli is currently the VP of Product Management for Storage Portfolio and Customer Operations. Pierluca’s organization oversees the portfolio strategy for the Storage BU and leads the technical resources across the major storage products.

The teams are responsible for creating a single storage portfolio vision and driving solutions that enable end users and service providers to transform their operations and deliver information technology as a service.

Pierluca has been with Dell EMC since 1999, with experience in field support and core engineering across Europe and the Americas. Prior to joining EMC, he worked at Data General and as a consultant for HP. Pierluca holds degrees in Chemical Engineering and Information Technology.

Show schedule:

  • 00:00 Introductions
  • 01:25 What are Dell Technologies’ plans to translate their « Data » story into palpable solutions, and how will they bake this into their portfolio?
  • 02:40 Customers can buy services in the cloud that have similar capabilities as on-premises products
  • 03:00 The storage portfolio should cover the edge, core, and cloud. Beyond coverage, the necessity to integrate automation in all products
  • 04:00 From a Dell Technologies perspective, VMware Cloud Foundation is the interconnect hub for the entire DT portfolio
  • 04:57 Is Dell moving to a data-centric approach to storage, and what about data mobility?
  • 07:30 Data migration and automation integrations
  • 08:08 Storage is the foundation of everything – are there plans for Dell Technologies to eventually offer a one-stop-shop / portal to consume data and cloud services?
  • 09:40 Unity Cloud Edition – an SDS implementation of Unity for the cloud
  • 11:17 VMware keeps being mentioned in a storage discussion – what is the role of VMware in the Dell Technologies (and especially the Dell EMC) future?
  • 14:01 Some insights around the rationalisation of Dell EMC’s very broad product portfolio
  • 16:05 The importance of maintaining the installed base, and having a loyal relationship with customers
  • 17:30 More than a portfolio rationalisation, a rationalisation of how platforms and solutions are built
  • 18:20 Final comments: rationalisation is a journey

EP24 – Pavilion Data: NVMe-oF for Modern Scale-Out Applications and Massively Parallel Compute – with Walt Hinton



In this 24th episode of the TECHunplugged Podcast we welcome Walter Hinton, Head of Corporate and Product Marketing at Pavilion Data. This episode was recorded live at Dell Technologies World 2019 in Las Vegas.

Podcast co-host Max Mortillaro (@darkkavenger) talks with Walt about the challenges with traditional all-flash infrastructures and the specific needs of modern scale-out applications. Walt covers those topics extensively, then goes on to explain why NVMe-based systems need purpose-built architectures, and how Pavilion Data is a great fit for scale-out / massively parallel applications and filesystems.

About Pavilion Data

Pavilion Data builds the industry’s leading NVMe-oF storage array. It is a true end-to-end NVMe solution, from the host all the way down to the media. The array is 100% standards-compliant, has zero host-side presence, and was designed for modern, massively parallel, clustered web and analytics applications.

Pavilion’s storage array delivers the next generation of composable disaggregated infrastructure (CDI) by separating storage from compute resources, allowing them to scale and grow independently. In today’s large-scale environments, it allows customers to become more agile by delivering the exact amount of composable resources needed at any given time.

Top view of a Pavilion Data system showing the NVMe drives, the PCIe fabrics, and the high-speed Ethernet interconnects – the rack size of a system is 4U

About Walt

Walt is responsible for Corporate and Product Marketing. He brings a deep technical background along with proven success in building marketing teams and programs. Over a 25-year career in data storage, Walt has helped build successful startups like McDATA, ManagedStorage International and Virident Systems. He also served as Chief Strategist at StorageTek, where he was instrumental in the creation of the Storage Networking Industry Association (SNIA). Most recently, Walt was Sr. Global Director of Product Marketing at Western Digital. He has a BA from William Jewell College and an MBA from the University of Denver.

Show schedule:

  • 00:00 Introduction, Walt’s Presentation
  • 02:00 Pavilion Data in numbers
  • 02:23 Traditional all-flash array architectures aren’t designed for NVMe because of bottleneck around controllers
  • 03:25 Rebuild times in case of a node loss (circa 25 minutes per TB) put a limit on the node capacity of direct-attached storage in scale-out storage architectures
  • 05:25 Pavilion Data: A storage design inherently built for NVMe: scalable controllers, plenty of networking (40x 100 GbE), a switch-based PCIe backplane, and the ability for customers to source the NVMe drives of their choice
  • 06:40 Walt explains that Pavilion Data’s architecture allows for a data rebuild at a rate of 5 minutes per TB
  • 07:32 Use cases, industries and verticals for Pavilion Data
  • 08:22 The perfect fit for Pavilion Data: scale-out applications leveraging Cassandra, MongoDB, MariaDB etc.
  • 08:38 The Pavilion Data – Pivotal partnership – supporting Greenplum (an open-source massively parallel data platform for analytics, machine learning and AI)
  • 09:01 A take on financial services, massively distributed databases, and backup challenges with multi-petabyte data lakes
  • 10:20 Talking about protocols (Pavilion Data is block-based) and clustered filesystems (Spectrum Scale, etc.)
  • 11:41 Continuing the discussion on supercomputing and massively parallel compute, media & entertainment, as well as government
  • 13:05 Describing the physical aspects of a Pavilion Data system
  • 13:50 A 4U, fully fault-tolerant system achieving 120 Gb/s reads or 90 Gb/s writes – what is the equivalent with a traditional AFA?
  • 15:05 The metaphor of the nail and the hammer
  • 15:31 Partnerships & Sales – how to engage Pavilion Data
  • 16:25 The partnership with Dell
  • 17:30 The synergy between Pivotal and Pavilion Data – embracing customer needs
  • 19:15 Talking about worldwide availability
  • 20:28 Closing remarks

EP23 – Lightbits Labs: NVMe Flash Performance Commoditization via NVMe over TCP – with Kam Eshghi



In this 23rd episode of the TECHunplugged Podcast we welcome Kam Eshghi, VP of Strategy & Business Development at Lightbits Labs. This episode was recorded live at Dell Technologies World 2019 (early May 2019) in Las Vegas.

Lightbits Labs has developed a software-defined storage solution leveraging NVMe over TCP. Their solution allows the disaggregation of storage from compute by offering DAS performance with enterprise-class data services, combined with massive scalability.

A sizeable part of the founding team was behind DSSD, a storage solution later acquired by EMC. DSSD was one of the very first storage architectures leveraging NVMe drives, which gives some confidence in Lightbits Labs’ NVMe over TCP concept. Lightbits Labs was founded three years ago and came out of stealth mode in April 2019, so we’re really thrilled to get prime time with them!

During the discussion, podcast co-host Max Mortillaro (@darkkavenger) talks with Kam about the product architecture, its concepts, and the use cases for NVMe over TCP.

Show schedule:

  • 00:00 Presentations, introduction to Lightbits Labs and NVMe over TCP
  • 02:00 Addressing the challenge of scalability of DAS and performance of traditional architectures
  • 03:30 The genesis of NVMe over TCP, and why NVMe over TCP is relevant today
  • 05:20 Kam states that no specific drivers are needed, and that Lightbits Labs made its source code available
  • 05:45 All of the intellectual property of Lightbits Labs resides on the target side; Kam also mentions that Lightbits Labs sells a hardware solution called SuperSSD, and that they also offer an optional accelerator card
  • 06:22 Max & Kam discuss partnerships & go-to-market strategy: software-only, via Dell OEM, or via the Lightbits SuperSSD appliance
  • 08:00 Kam: « All you need to get started is a server with NVMe drives and a standard Ethernet NIC »
  • 08:30 Let’s talk architecture and data services
  • 09:45 Kam mentions a « Global FTL » – that has Max interested in understanding how the internal logic of NVMe SSD drives is managed
  • 12:30 More insights into data services
  • 13:45 Understanding the customer base and use cases for Lightbits Labs: SaaS companies, Service Providers, etc.
  • 15:40 Talking about data protection, replication, and availability
  • 16:30 Application use case: distributed databases (Cassandra, MongoDB, etc.) and High-Performance Analytics workloads, and over all anything that requires high performance and operates at scale
  • 18:00 Max’s usual « WOW » / speechless moment; Kam shares his excitement about early-adopter customers, not only hyperscalers but also private cloud initiatives within enterprise IT organisations
  • 19:00 Since Lightbits Labs is block-based, Max raises the question of whether there are any plans to offer managed services, if that makes sense at all
  • 20:15 Covering the topic of licensing models
  • 21:09 What about the optional acceleration card? Kam explains that the card sits on storage nodes, and walks through the decision points about when it may make sense to use it. The card offers flexibility to customers who may want to select entry-level CPUs to keep costs under control.
  • 23:30 Thanks and Conclusion

EP22 – Excelero – NVMe Software-Defined Storage on Steroids – with Josh Goldenhar



In this 22nd episode of the TECHunplugged Podcast we welcome Josh Goldenhar, VP of Products at Excelero. Prior to joining Excelero, Josh was responsible for product strategy and management at EMC (XtremIO) and DataDirect Networks.

Podcast co-hosts Max Mortillaro (@darkkavenger) and Arjan Timmerman (@Arjantim) talk with Josh about Excelero, the solution’s architecture, its use cases & differentiators.

Show schedule:

  • 00:00 Presentations & introduction to Excelero
  • 01:50 We learn that Excelero is software-based and uses NVMe drives. We ask Josh whether there is a Hardware Compatibility List. Josh goes on to talk about custom-built vs. co-engineered solutions (Dell, SuperMicro) and mentions a recently announced partnership with Lenovo.
  • 03:25 Josh explains how Excelero was built from the start to be hardware agnostic and provides a perspective about how each hardware vendor is looked at based on their own hardware specificities.
  • 04:46 Consuming Excelero: what customers need to do to get their Excelero storage up and running – either via pre-built appliances or via custom-built hardware and the installation of a couple of RPM packages.
  • 06:40 Is Excelero block-based or file-based? Josh explains that Excelero is a block-based distributed storage and provides background about the rationale to go block-based only.
  • 08:40 We ask Josh about Excelero’s customers and their use cases. Low latency, consistency in response times, and ability to scale are key to those customers. Josh then goes on to explain some of the common challenges faced by web-scalers and how Excelero fits in the picture.
  • 11:25 The case of Technicolor, an Excelero customer – how the motion picture industry requires bandwidths akin to those used in HPC clusters
  • 12:35 Excelero storage deployment modes (disaggregated vs. converged) and their technical implications
  • 15:35 A look into network interconnects that are supported by Excelero and throughput capabilities
  • 19:20 Talking about HPC and Local Burst Buffer / Local Scratch – integration with SLURM job scheduler and local nodes
  • 23:50 Being mind blown and forgetting a question – the Venn Diagram of Happy and Sad
  • 24:20 Remembering a question – does Excelero support different media / performance tiers (such as SATA SSD or HDD), or 3D XPoint? Josh provides a comprehensive view of what is supported, backed by Excelero’s rationale about why things are done or implemented in a specific way.
  • 27:30 How / where to purchase Excelero, and how is it licensed?
  • 30:15 Conclusion

EP21 – Discovering Weka.io, the world’s fastest filesystem with Barbara Murphy



In this 21st episode of the TECHunplugged Podcast we welcome Barbara Murphy, VP of Marketing at Weka.io. Weka was presenting at Storage Field Day 18, where Max had the opportunity to be physically present and Arjan followed remotely thanks to the magic of the Interwebs(tm). Weka.io claims to have the fastest, most scalable parallel file system for AI and technical compute workloads. We set off on a journey to ask Barbara Murphy some questions on exactly that topic.

Podcast co-hosts Max Mortillaro (@darkkavenger) and Arjan Timmerman (@Arjantim) talk with Barbara about her background and about Weka.io. It’s been a privilege for us to talk with Barbara, as she delivered a truly mind-blowing presentation at Storage Field Day 18. Barbara has more than 20 years of experience in the industry, with past C-level marketing leadership positions at HGST, Panasas and others.

Show schedule:

  • 00:00 Presentations & Introduction
  • 01:15 The weka.io architecture – a software-defined storage solution
  • 02:47 The weka.io filesystem and supported protocols
  • 04:06 What are the most common customer use cases for Weka.io, and what is the adoption model? Are customers starting with AI/HPC and expanding weka.io usage to other use cases?
  • 06:10 What kind of storage media is recommended / supported with this software-defined storage system?
  • 08:30 Talking about the weka.io object store
  • 10:00 Scalability and performance – how to scale the filesystem, and is increase in performance predictable?
  • 11:52 What is the relationship between weka.io filesystem and other AI / HPC filesystems such as BeeGFS or GPFS?
  • 14:10 What is weka.io doing to stay ahead of the competition in massively parallel filesystems?
  • 16:01 Talking about Storage Field Day 18 and monster performance numbers
  • 17:42 Are benchmarks a truly reliable way to assess real world performance?
  • 19:18 Talking about weka.io partners, how can customers purchase the product and how it is licensed
  • 21:37 AWS was mentioned earlier, are cloud deployments supported? If yes, on which public cloud providers?
  • 23:35 How to install weka.io?
  • 24:40 Conclusion

EP20 – Dell Technologies Netherlands – DataCenter Technology



During this podcast our host Arjan Timmerman talks with Tom van Peer, the Enterprise Presales Lead at Dell Technologies in the Netherlands. The conversation is about the datacenter technology provided by Dell Technologies, and thus focuses on Dell, EMC, VMware, Pivotal and other products in the Dell Technologies portfolio.

Tom started his career at Philips, working on AI many years ago, so we also focus on the AI trends that Tom sees these days and what they mean for a company like Dell Technologies. We also talk about two upcoming events. The first, in the Netherlands, is focused around AI, and Tom will speak there about AI. You can find the (free) registration link below:

https://www.eiseverywhere.com/ehome/index.php?eventid=405241&tabid=856593

And the other event being Dell Technologies World in Las Vegas, where the TECHunplugged team will be as well. More information on that can be found at the link below:

https://www.delltechnologiesworld.com/index.htm

Happy listening, and stay tuned for some awesome new content coming soon!


EP19 – TECHunplugged preview of Dell Technologies World 2019



In this episode TECHunplugged hosts Max Mortillaro and Arjan Timmerman preview Dell Technologies World 2019, which they will both attend as analysts.

The conversation is about what we think of Dell Technologies: the past, but also the road ahead for this great company. The companies under the Dell Technologies umbrella offer a great opportunity, but they need the right guidance and steering to act as one. Max also takes us back to the Dell Technologies analyst event in Chicago in late 2018, where he heard directly from Michael Dell and his team about the Dell Technologies vision.

We also talk about the opportunity we get to be at Tech Field Day Extra at Dell Technologies World 2019, and the awesomeness and opportunities provided by the Tech Field Day team, which by the way celebrates its tenth anniversary (congrats team!!). Looking at their website, we talk a little about the #TFDx sponsor Liqid and composable infrastructure.

Last but not least, we go into what we look forward to hearing at the event, as well as some of the #DTW sponsors like Komprise (listen to our podcast with them here), data management, and where we would like to see (and hear) Dell Technologies focus over the next couple of years.

Happy listening!