Computer backup is the practice of storing copies of your data somewhere other than your hard drive, on whatever medium suits you, so that you always have another copy of each file. We all know that damage can come to a computer and result in a loss of data, and having proper backups greatly eases the pressure on us if such a thing does occur. If you lose data on your machine for any reason, you can restore the files from your previously made backups.
Zen is an ancient Asian philosophical system that arose in China and then traveled to Japan. It is a combination of the teachings of Siddhartha Gautama, the Buddha (enlightened one) from India, and the Taoist philosophies that had grown up in China from the teachings of Lao Tzu and the Tao Te Ching.
One might wonder how Zen could play any part in the modern technological world of computers, especially in such a seemingly mundane task as computer backup. Yet Zen can play a great part in computer backup. Zen is the art of becoming one with that which you are doing. It is a form of meditation that puts you in a state where you are sure of exactly what is going on around you. It is a way of life, and it can become a part of all aspects of your life, especially something as essential as computer backup.
When you are backing up your computer, do not just treat it as a mundane task that must be performed. Allow yourself to become one with your data, and with your computer. There are many different methods of computer backup, and you can practice Zen with any of them.
Traditionally, floppy disks were used for computer backup. However, floppy disks are becoming increasingly obsolete. After all, a floppy disk holds only about 1.44 megabytes, while a CD-R can hold 700 megabytes or more. It is easy to see why the older method would be seen as ineffective.
Zen is all about effectiveness and living properly. While computer backup is a very important practice, and one that should be performed often, it merits us nothing to spend far more time and far more disks backing up to floppies than it would take to back up to a CD-R, DVD-R, or another medium that holds a great deal more data.
Another form of backup that is becoming very popular is the use of what are called key drives. Key drives are tiny drives that fit on your key chain yet can hold a gigabyte or more of data. Plug one into your computer and, on most newer machines, it will be read automatically, without the need for any device drivers: the perfect realization of plug-and-play technology.
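In practice, a backup to a key drive can be as simple as a dated copy of a folder. Here is a minimal sketch in Python; the mount point and source folder below are assumptions you would adjust for your own machine.

```python
# A minimal key-drive backup sketch. The mount point and source folder
# are assumptions -- adjust them for your own machine.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Documents"      # what to protect
DEST = Path("/media/keydrive")          # where the key drive mounts (assumption)

def backup(source: Path, dest_root: Path) -> Path:
    """Copy `source` into a dated folder on the backup medium."""
    target = dest_root / f"backup-{date.today().isoformat()}"
    shutil.copytree(source, target)     # fails loudly if today's folder exists
    return target

if __name__ == "__main__":
    print(f"Backed up to {backup(SOURCE, DEST)}")
```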
Oooommmmmm....
-by Steve Carl, Senior Technologist, R&D Support
BMC is one of the largest VMware shops anywhere. We have nearly 9,000 virtual machines running in our ESX server farms alone, and our growth trajectory will have us break 10,000 VMs before the end of the summer. That is just VMware, which is not the only virtualization player in our shop. We are even bigger users of virtualization than that, with the granddaddy of all virtualization, VM on the mainframe, plus VirtualBox, Parallels, AIX LPARs, Sun LDoms, HP's VSE (not to be confused with IBM's DOS/VSE...), and so forth.
Not all that long ago, our worldwide "real" server count for R&D was a large number: well north of 10,000 real, physical computers. BMC grew, more products came online (entire product categories, even)... and yet the real hardware footprint has shrunk to about half what it was three years ago. Ditto the data center space. The current R&D DC move I am working on takes us from over 7,000 jam-packed square feet down to 5,000 square feet, and leaves room to absorb another 1,000-square-foot lab later. In this one lab, we have leveraged virtualization to more than halve the number of real servers.
Converting older real, physical machines to the virtual world (P2V) is part of that virtual growth, but so are new requests for new environments. Think of the latter as "real server avoidance." The impact is huge in terms of BMC becoming, among other things, a greener company. It is not just real coffee mugs in the kitchen (rather than Styrofoam cups) and recycling the Diet Dr Pepper cans (contrary to popular belief, not all software development is powered by Mountain Dew). We use less power than we did before. Much less power.
Conservatively speaking, if we used 100 fewer watts per virtualized server than for a real computer doing the same things, that alone would be 900,000 watts! 900 kW here, 900 kW there, and pretty soon we are talking real carbon footprint reduction. And 100 watts is a very lowball figure: even new computers with high-efficiency power supplies, like a Dell 1950, draw well more than 200 watts at a static, post-boot load. The 1950's power supply is max-rated (nameplate rating) at 670 watts, and depending on your local code, when planning a data center it is assumed that somewhere between 40% and 60% of the nameplate rating is the wattage drawn once the server has settled down after booting.
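To make that arithmetic explicit, here is a quick back-of-the-envelope sketch in Python. The 670-watt nameplate, the 40-60% derating, and the 100-watt-per-server saving are the figures from above, not new measurements.

```python
# Back-of-the-envelope check of the wattage figures above. The derating
# range and per-server saving are the post's own assumptions.
NAMEPLATE_W = 670                        # Dell 1950 power supply rating
DERATE_LOW, DERATE_HIGH = 0.40, 0.60     # settled draw as fraction of nameplate

settled_low = NAMEPLATE_W * DERATE_LOW   # ~268 W
settled_high = NAMEPLATE_W * DERATE_HIGH # ~402 W

VM_COUNT = 9_000
SAVING_PER_SERVER_W = 100                # deliberately lowball
total_saving_w = VM_COUNT * SAVING_PER_SERVER_W

print(f"Settled draw estimate: {settled_low:.0f}-{settled_high:.0f} W per server")
print(f"Total saving: {total_saving_w:,} W ({total_saving_w / 1000:.0f} kW)")
```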
At some point I'll probably develop a tighter number than 100 watts per server saved, but it will do for now. I would have to add up the real wattages of every server we decommissioned, and then add up the wattages of all the hosts running the virtual servers, in order to get a better estimate, and that would take a while given the number of servers we are talking about here!
I talked about saving power with virtualization a while back ("Virtually Greener"), and in that post I focused only on what we had done in Houston. This is company-wide, and clearly we have come pretty far down the road from where we were only a year and a half ago. Here is what I noted about Houston's power savings back then:
"That means 80 Kilowatts or 80, 000 Watts have "left the building". 80 KW reduction is 160 pounds of CO2 reduction each and every hour they are off (assuming Coal as the power feedstock). 3, 840 pounds per day. 1, 401, 600 pounds per year. Half those numbers for natural gas as the power generation feedstock"
So, using those same numbers, and expanding the scope from Houston to all of the R&D data centers worldwide, we are now talking about 11.25 times those amounts: 1,800 pounds of CO2 an hour, 43,200 pounds of CO2 a day, and 15,768,000 pounds of CO2 a year that we are now *not* adding to our shared atmosphere. Remember that is *low* because of the estimate: the real numbers are better than that. Maybe twice as good, even.
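The same scaling is easy to parameterize. The 2 pounds of CO2 per kilowatt-hour for coal is implied by the quoted figures (80 kW yielding 160 lb/hr); this sketch just redoes the multiplication for both scopes.

```python
# CO2 arithmetic from the quoted figures. The coal factor of 2 lb/kWh is
# implied by "80 kW -> 160 lb/hr"; halve it for natural gas, as noted above.
LB_CO2_PER_KWH_COAL = 2.0

def co2_avoided(kw_saved: float, factor: float = LB_CO2_PER_KWH_COAL):
    """Return (lb/hour, lb/day, lb/year) of CO2 avoided for a given kW saving."""
    per_hour = kw_saved * factor
    return per_hour, per_hour * 24, per_hour * 24 * 365

for label, kw in [("Houston", 80), ("All R&D", 900)]:
    hr, day, yr = co2_avoided(kw)
    print(f"{label}: {hr:,.0f} lb/hr, {day:,.0f} lb/day, {yr:,.0f} lb/yr")
```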
Call me corny, but this makes me happy. I am visiting our corporate headquarters in Houston as I write this, and it is early in April. Just barely Spring. It is hot. I am glad we are doing what we can to not make it hotter.
P2C
With all this virtualization, and the addition of BladeLogic to our corporate tool chest, we have created quite a change in our internal R&D compute capabilities. We have a compute cloud. We have gone Physical to Cloud (P2C™). While
.... add all these things together, put them in service in fewer, more regionally consolidated data centers connected with point-to-point network clouds, and you pretty much have, by any definition of Cloud Computing, an internal compute cloud. One with more OS images than before, more capabilities than before, faster turnaround than ever, and that is using far less of our planet's shared resources.
In my list above I noted some of the common Cloud Computing building blocks. In particular, I think the key enablers for the Cloud concept are Virtualization and Provisioning. You could reasonably argue that neither is required, that it is only about having a computing resource available via the network, and I would not argue with you. That, in fact, has been an underlying theme of my last few posts. A good example of a Computing Cloud that is not virtualized is the recent information we just got about how Google designs their data centers. Fascinating stuff. No virtualization in sight, but clearly a Compute Cloud.
Virtualization and Provisioning are tools that make delivery faster and availability easier. In point of fact, you would not need many of the things on my list to build a cloud, as long as you were keeping the operation fairly small.
The bigger it gets (ignoring cases like Google, where a single task scales beyond the size of a single computer), the more important each of those tools becomes, and if you are planning ahead, you will be ready with the tool set *before* you actually need it. Performance and capacity planning is a great example of a discipline that gets more important as the virtual server farm grows. I was recently using our BPA/CME tool set to create standard configurations for our next round of server purchases, for example, and I am ready with data from BPA to show that we need to put more memory into our configs than we have to date. When you are talking about a server farm with hundreds of servers, even if they replaced thousands of servers and you are already saving serious CO2 emissions and expense money, it is still a serious investment.
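This is not the BPA/CME tool set itself, just a hedged sketch of the kind of sizing arithmetic that sits behind a standard configuration; every number in it is a placeholder, not a BMC figure.

```python
# Hypothetical sizing arithmetic for a standard host config -- not the
# BPA/CME tools, just an illustration. All numbers are placeholders.
def vms_per_host(host_ram_gb: float, avg_vm_ram_gb: float,
                 hypervisor_overhead_gb: float = 4.0,
                 headroom: float = 0.20) -> int:
    """How many VMs fit on a host, reserving headroom for growth/failover."""
    usable = (host_ram_gb - hypervisor_overhead_gb) * (1.0 - headroom)
    return int(usable // avg_vm_ram_gb)

# Example: more memory per host means fewer hosts for the same VM count.
print(vms_per_host(64, 4))    # 12 VMs per 64 GB host
print(vms_per_host(128, 4))   # 24 VMs per 128 GB host
```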
The other thing one has to be careful of when building one of these Compute Clouds is, of course, virtual server sprawl. When it is cheap and easy to deploy new vservers to meet new requirements, the tendency is to leave servers running until someone tells you they are not needed anymore. More often than not, no one will tell you that: everyone is looking at the next project, not the last one. One does not want to undo the goodness of P2C by running a far bigger server farm than current demand, plus a share of peak, plus growth actually requires.
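One simple countermeasure is a periodic sprawl report. The sketch below flags vservers with no recorded activity in 90 days; the inventory shown is hypothetical, and in practice it would come from the virtualization manager or a discovery tool.

```python
# Hypothetical sprawl report: flag vservers idle for 90+ days. The
# inventory here is made up; a real one would come from the hypervisor
# manager or a discovery tool.
from datetime import datetime, timedelta

inventory = [                       # (vm name, last recorded activity)
    ("build-lnx-042", datetime(2009, 1, 5)),
    ("qa-win-117",    datetime(2009, 4, 1)),
]

cutoff = datetime.now() - timedelta(days=90)
stale = [name for name, last_seen in inventory if last_seen < cutoff]
print("Candidates for reclamation:", stale)
```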
The Linux Connection
I have not felt particularly constrained recently to keep my blog just about Linux. Partly this is because, as a technologist, my role is much wider than just Linux. Partly it is because my biggest recent project has been designing, building, and getting ready to consolidate five R&D data centers down into one smaller data center.
This does not mean "Adventures" will never have Linux stuff in it again. In fact, the next post I am planning is pretty much pure Linux, with an update about where MAPI is in Evolution.
The other part of it is that Linux is not something people think about anymore and ask "Will it make it?". Red Hat's last quarter alone should be proof of that. Linux is ubiquitous. It is at the core of VMware. It is embedded in the lights-out management cards. It is in the netbooks, the fast-boot BIOSes, and the SaaS bits of Cloud Computing. It is where virtualization is often first developed, and first deployed. It is the core of supercomputing, and the "L" in the LAMP stack, which provides so much of the Internet. It is seriously challenging OS X in the smartphone market. It is making inroads in real-time computing, where VMS has been king for so long. No one ever asks me if BMC supports Linux anymore. They just assume we do.
It is everywhere, and now we are starting to just assume its presence. My wife, long a holdout because of her love of OS X, even runs Ubuntu 9.04 on her Dell Mini 9. It is everywhere, and in everything. The question becomes not "Should we run this on Linux?" but "Is there any reason *not* to run this on Linux?"
At some point, "Adventures" is going to probably be, at least in part, about finding Linux in all the places it is hiding around us.
The postings in this blog are my own and don't necessarily represent BMC's opinion or position.