Data is data no matter where it resides, yet the lifecycle of ingesting, storing, retrieving, using, securing, and eventually disposing of data is very important. There's no data center or cloud without the data, and data has been Dan's technology focus and interest for quite a long time. Advances in computing power and network bandwidth, coupled with fast data storage technologies, have changed the game in the last ten years or so. Current and future capabilities to leverage ML and AI in interpreting and accessing data have radically expanded the possibilities for using data to do great things.
Dan has experience helping clients leverage their data, and whether it is housed and used in a traditional data center, an off-premises cloud, or a colocation facility, Dan can be of help.
Dan spent many years developing, architecting, and then selling data center technology. He has knowledge in a broad range of areas, including but not limited to:
- Data Storage
- Compute / Server
- Virtualization (Server, Network, Code)
- Network
- Software
- Automation
- Application Resilience
- Application Recovery
- Scalability
- Physical Characteristics (power, air, etc.)
For many years Dan worked directly with IT and application owners at his clients to improve their operations. He also worked with implementation teams and data center owners as the solutions were deployed.
While Dan never directly sold public cloud solutions, he has some experience with them and has indirectly pitched a few. In fact, his first experiences with cloud came before "cloud" became synonymous in common parlance with an off-premises location - what NIST calls "public cloud." (NIST's definition of cloud computing can be found here.)
That said, Dan has some experience in all four cloud deployment models: public cloud, private cloud, hybrid cloud, and community cloud.
Public Cloud
Public cloud is an amazing blessing and benefit. It has dramatically accelerated the pace at which automation within technology has become reality, it has shaped the face of software architecture and design, and it has drastically increased the availability of technology to the general public - including to the one-person shop that is just building the prototype for their technology business but lacks the funds to implement a data center.
Public cloud is useful for many things, but not all things. There are cases for private cloud, hybrid cloud, and community cloud as well. When it comes to positioning solutions, those solutions should account for things like:
- state of the client's technology at the time
- direction the client wants to go with technology
- business technology choices that must be made (e.g., is cloud "lock-in" acceptable?)
- need for application availability and resilience
- value of the data
Dan's personal belief is that if you're going to go all-in on public cloud, at least do it on two different clouds and ideally mandate that the application architecture for the most part work on either cloud. He is not a fan of cloud "lock-in", and has heard from clients who suffered pain from being so heavily invested in one vendor that when something changed and they wanted to move, they had a very difficult time moving their applications.
Yes, it costs more to architect applications to run across multiple clouds. Yes, it may limit you in some respects if you decide you want zero lock-in, as you're unable to use the latest and greatest features until all your cloud vendors support them. Yet from a long-term perspective Dan believes it is the best thing to do.
He's not ignorant, though, of the reality that few companies are investing in the cost to implement full multi-cloud portability. He's also happy to see that many companies are choosing to at least split their investments across two or more cloud vendors, with some applications in each vendor's public cloud but few or no applications in both clouds.
Dan's direct exposure to public clouds is predominantly with Amazon AWS and Microsoft Azure.
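As one small, generic illustration of the portability Dan argues for, here is a minimal sketch - not any client's actual design - of an application that talks to a single storage interface while thin adapters hide each vendor's SDK. It assumes the boto3 and azure-storage-blob Python SDKs; the names BlobStore, S3BlobStore, AzureBlobStore, and save_report are made up for the example.

```python
# A minimal sketch of cloud-portable object storage: the application depends only
# on BlobStore; each adapter hides one vendor's SDK. Names are illustrative only.
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """The only storage interface the application is allowed to see."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3BlobStore(BlobStore):
    """AWS adapter (assumes boto3 is installed and credentials are configured)."""

    def __init__(self, bucket: str):
        import boto3  # imported lazily so only the adapter in use needs its SDK
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class AzureBlobStore(BlobStore):
    """Azure adapter (assumes azure-storage-blob and a connection string)."""

    def __init__(self, connection_string: str, container: str):
        from azure.storage.blob import BlobServiceClient
        self._container = BlobServiceClient.from_connection_string(
            connection_string
        ).get_container_client(container)

    def put(self, key: str, data: bytes) -> None:
        self._container.upload_blob(name=key, data=data, overwrite=True)

    def get(self, key: str) -> bytes:
        return self._container.download_blob(key).readall()


def save_report(store: BlobStore, report_id: str, body: bytes) -> None:
    # Application code never mentions AWS or Azure; swapping clouds means
    # swapping the adapter passed in, not rewriting the application.
    store.put(f"reports/{report_id}", body)
```

Swapping clouds then means swapping the adapter an application is handed rather than rewriting the application itself, which is the essence of avoiding lock-in.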
Private Cloud & Hybrid Cloud
Dan worked for a large, multinational technology company that focused on supplying the resources for data centers as well as end users. They owned a company that essentially became the de facto standard for private cloud, and which later developed good solutions for hybrid cloud. They also owned a software business that helped clients implement applications independent of the cloud they ran on - which truly enabled multi-cloud operability.
So Dan's sales and technology history is rich in the area of private and hybrid cloud. He still believes on-prem has its place, and there is a reality that some core assets will continue to live on private clouds for a while, if for no other reason than they're really hard to move.
Today there is a tension between private and public cloud because public cloud does offer legitimate benefits and increasingly compelling options, and the public cloud vendors would (rightfully) enjoy having all the business given to them. Private clouds, however, still offer value in areas that public clouds cannot, and private clouds offer complete control over the digital assets, hardware assets, and operations.
What is the right balance between private, hybrid, and public cloud?
Dan would say that it depends on the client, their needs, their risk tolerance, their technology objectives, and their growth forecasts.
At the end of the day, Dan is well-versed in private and hybrid cloud. He'll be happy to talk to you about it.
Community Cloud
During his time in sales Dan worked closely for a while with a community cloud vendor or two - vendors used or being evaluated by his clients. Community cloud is much like private cloud from the infrastructure standpoint, but the type of contract chosen by a client may include varying levels of abstraction, services, or general management. It can range from essentially an off-premises data center run by the client using leased resources to fully outsourced and managed leased resources with some degree of interfacing/automation.
Of the four cloud models, Dan worked least on the Community Cloud model.
Much of Dan's sales career focused on application resilience and recoverability. He worked with large corporations that might be able to tolerate some downtime on a tertiary end-user application but could not tolerate downtime on primary applications without business impact.
Over time applications have become more resilient, and that has to some degree been enabled by advances in the network bandwidth and speed available between application instances. For example, Oracle has become much more resilient in the sense that replication and recovery can now very feasibly be done at the software level rather than relying on the hardware below the application to replicate.
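To make that idea a bit more concrete, here is a toy sketch of recovery handled in the application layer: the code simply tries a primary endpoint and falls back to a replica. It is a generic pattern shown for illustration only - not Oracle's mechanism or any specific product's behavior - and the endpoint names and the connect_with_failover helper are invented for the example.

```python
# A toy illustration of recovery handled in the application layer rather than in
# the hardware beneath it: try the primary endpoint, then fall back to a replica.
from typing import Callable, Optional, Sequence, TypeVar

Conn = TypeVar("Conn")


def connect_with_failover(
    endpoints: Sequence[str],
    connect_fn: Callable[[str], Conn],
) -> Conn:
    """Return a connection to the first reachable endpoint, in priority order."""
    last_error: Optional[Exception] = None
    for endpoint in endpoints:
        try:
            return connect_fn(endpoint)
        except Exception as exc:  # real code would catch the driver's specific errors
            last_error = exc
    raise ConnectionError(f"all endpoints failed: {list(endpoints)}") from last_error


# Example usage with a stand-in connect function; a real application would pass
# its database driver's connect call and its own primary/replica endpoints.
if __name__ == "__main__":
    def fake_connect(endpoint: str) -> str:
        if "primary" in endpoint:
            raise ConnectionError("primary is down")
        return f"connected to {endpoint}"

    print(connect_with_failover(["db-primary:1521", "db-replica:1521"], fake_connect))
```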
Dan has thought a lot about application resilience. He certainly doesn't know everything, but he'll be happy to share what he knows.
Dan worked closely with virtualization teams at his clients. He's familiar with virtualization concepts and technologies. He has been running virtualized systems at home for more than a decade.
One of the newer and very exciting forms of virtualization/abstraction is containers, and Dan has had exposure to them as well. Containers offer many benefits to IT and application deployment today.
Dan has some experience with IT automation technologies. Several years ago he worked on a project where he tried to automate deployment of a technology his company sold, using Ansible to target virtual or Azure deployments. He had fun playing around with that. :-)
When it comes to the question, "Automate or not?", Dan is a strong proponent of automation wherever sufficient information is available to do so. Automation - when done well - brings consistency. It needs to be implemented carefully and monitored during and after rollout to catch unexpected behaviors and resolve them. Further, automation should leave enough of a trail to figure out what was done, why it was done, who was involved (if anyone - it could be an app), and the success or failure of each step, with applicable return status.
Good automation will have rollback mechanisms wherever possible that can be used in the case of failure. Those rollback mechanisms may or may not be triggered automatically, but automating the "undoing" of what was done is as important as automating the forward direction. This helps ensure consistency, avoids the buildup of "ghosted" resources, and helps people understand the teardown mechanism if it ever needs to be executed manually.
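As a sketch of what that can look like in practice - purely illustrative, with made-up step names and no real provisioning calls - the following pairs every forward step with an "undo" step, logs a trail of what happened, and rolls back completed steps in reverse order when something fails.

```python
# A minimal sketch of automation with an audit trail and rollback: each step
# carries both a "do" and an "undo"; a failure unwinds whatever completed.
import logging
from dataclasses import dataclass
from typing import Callable, List

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("automation")


@dataclass
class Step:
    name: str
    do: Callable[[], None]
    undo: Callable[[], None]


def run_with_rollback(steps: List[Step]) -> bool:
    """Run steps in order; on failure, undo completed steps in reverse and return False."""
    completed: List[Step] = []
    for step in steps:
        try:
            log.info("starting: %s", step.name)
            step.do()
            completed.append(step)
            log.info("succeeded: %s", step.name)
        except Exception as exc:
            log.error("failed: %s (%s); rolling back", step.name, exc)
            for done in reversed(completed):
                log.info("undoing: %s", done.name)
                done.undo()  # real automation would also log/handle undo failures
            return False
    return True


# Example usage with placeholder steps; real steps might create a VM, attach
# storage, and register the node with a load balancer.
if __name__ == "__main__":
    def simulate_timeout() -> None:
        raise RuntimeError("simulated timeout")

    steps = [
        Step("create vm", do=lambda: None, undo=lambda: None),
        Step("attach storage", do=lambda: None, undo=lambda: None),
        Step("register with load balancer", do=simulate_timeout, undo=lambda: None),
    ]
    run_with_rollback(steps)
```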