Join David Linthicum for an in-depth discussion in this video High-performance data transfer, part of Cloud Architecture: Advanced Concepts.
- [Instructor] Another advanced concept when dealing with cloud computing architectures is the notion of high-performance data transfer. So cloud is a bit of a trade-off, because as we put processes and data out on the public cloud, the ability to transfer data between clouds, and between clouds and the enterprises, is quickly becoming more important. And so as architects we need to consider how we're going to approach that problem. And there are options for you. There are things like Snowball, which is AWS's way of actually loading up data onto a device so you can ship the data directly to AWS.
There's the ability to leverage leased circuits. There are all kinds of options for you to solve this issue. But understanding how to approach it, I think, is the first step. So high-speed data transfer is typically between enterprises and the clouds, or legacy systems. And so we may have a traditional enterprise data center that's been out there for years, and it has a leased circuit between that data center and the enterprise. Now we added a public cloud, say Amazon. And now we added another public cloud, say Microsoft Azure.
And we have to figure out, as we move information between these systems, how to do so in a cost-effective but high-performing way. And that becomes kind of the need for the architects to reach out to the network experts to figure out the best way to do that. Now on the public cloud side, most public clouds do offer you the ability to leverage a leased circuit or a dedicated circuit directly from the public cloud provider to the enterprise. And obviously, the value of that is you're going to have priority in terms of packets.
You're not going to send things over the open Internet. There are security issues with that. But in many cases, it's going to be latency issues. It's just not as fast as if you're leveraging a dedicated circuit in between your enterprise and your public cloud provider. But there's also the instance of cloud to cloud. In some instances we're going to have our AWS applications which are communicating with data that exists on Microsoft Azure. And the providers' ability to offer backplanes and various systems, which exist today: you need to look at that in terms of the capability. So if you are, for whatever reason, going to be hosting data on one public cloud and the processes on the other public cloud, you need some sort of high-speed way for those things to communicate with one another.
You need to consider that in terms of potential latency that could arise from building and configuring that connection. Like I said, the public cloud providers do provide backplane services. They do provide dedicated circuits. They do provide other things out there to allow you to move information much faster between various source and target systems that may exist on remote systems that aren't owned by the individual cloud providers or the enterprise. But you need to look at those, you need to understand those. And I would also suggest you test those, actually push data back and forth.
Take some benchmark readings. Do some performance testing, things like that. Very important.
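The kind of test described above can be sketched in a few lines of Python. This is a minimal, illustrative benchmark: it pushes a fixed payload over a socket and reports throughput. Here it uses the local loopback address as a stand-in; in a real test you would point it at an endpoint in the other cloud and compare the open-internet path against the dedicated circuit. The payload size and endpoint are assumptions for illustration, not values from the course.

```python
import socket
import threading
import time

PAYLOAD_MB = 16  # hypothetical test size; size real tests like production transfers


def _drain(server_sock, result):
    """Accept one connection and count every byte received."""
    conn, _ = server_sock.accept()
    received = 0
    while True:
        chunk = conn.recv(1 << 16)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    result["received"] = received


def measure_throughput(host="127.0.0.1"):
    """Push PAYLOAD_MB megabytes to `host` and report MB/s.

    Loopback is a stand-in: swap `host` for a receiver in the
    target cloud to compare internet vs. dedicated-circuit paths.
    """
    server = socket.socket()
    server.bind((host, 0))  # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]

    result = {}
    t = threading.Thread(target=_drain, args=(server, result))
    t.start()

    client = socket.create_connection((host, port))
    payload = b"x" * (1 << 20)  # 1 MB chunk
    start = time.perf_counter()
    for _ in range(PAYLOAD_MB):
        client.sendall(payload)
    client.close()
    t.join()
    elapsed = time.perf_counter() - start

    return result["received"], PAYLOAD_MB / elapsed


if __name__ == "__main__":
    received, mbps = measure_throughput()
    print(f"transferred {received} bytes at {mbps:.1f} MB/s")
```

Running the same script against each candidate path, at different times of day, gives you the kind of baseline numbers the architect and the network experts can compare before committing to a dedicated circuit.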
- Microservices and containers
- Complex, distributed, serverless, and composite architectures
- DevOps integration
- High-performance solutions