September 16th, 2013
10:26 am

Posted by


Mrinal Jain

I remember a time, growing up when computers were relatively new, when the boast was about the kind of computers that existed in the office compared to the relatively simple “computer” at home. In essence, innovation took place on enterprise systems, and with time this innovation flowed into “consumer” computers in the home.

Today, the computer technology available to most people in their personal lives appears far more innovative and more useful than any technology available in the workplace. This is because some basic attributes distinguish personal computing technology today from enterprise technology: personal devices are integrated, easy to use, and flexible enough for a variety of tasks, while enterprise IT is very complicated, cobbled together by specialists at high cost, and relatively inflexible.

The recent spate of “integrated” offerings from traditional IT vendors is in some ways a “catch up” with the much more vibrant evolution of computing in the consumer world. The introduction by IBM, the company responsible for so much of the innovation in computing over decades, of a new category of systems called Expert Integrated Systems is the beginning of a new era in computing that will alter the way enterprises buy, use and maintain computing technology assets. Such a system not only combines components that work well together but also provides a powerful management paradigm, and it is a modular “IT unit” that scales simply and to a very large degree.

We are moving from the time in which enterprises bought best of breed components and attempted to build their own IT infrastructure to an era in which they expect solutions that come integrated by design, include the best of breed practices for deployment and use gained from many decades of putting together enterprise IT, and which provide a simplified experience all the way from purchase through use and maintenance.

Most importantly, this new era of computing promises customers the ability to devote less time and resources to assembling and maintaining IT infrastructure, and more to running their businesses and delivering the best products and services to their customers.

Let us welcome this new era in which customers can use IT as a tool to run their businesses better.

Building Large Clouds Made Easy

Cloud Computing is a term that has been so overloaded as to have no clear meaning any more. Yet building IT clouds and making them available the way we use electricity is changing the place of IT infrastructure, and changing what businesses will be able to achieve by using computing as a service.

Let us briefly consider what the hardware units that will make up the large cloud infrastructures of tomorrow will look like.

Cloud Computing at a very simple level promises users a simplified experience of accessing and using computing, without requiring one to understand the parts of the solution, how it has been put together or in fact how it works. In effect, what is needed are easy to use devices that come ready to work, allowing the user to concentrate on getting work done rather than learning to use the resource. Cloud Computing also implies a responsive end user experience that is unchanged even as the need scales up and down.

Imagine a system that comes integrated as a single solution, combining various components in a way that works well together at peak performance at all times, is easy to use, and scales automatically to provide a seamless end user experience. It is this concept that has driven the creation of a category of Expert Integrated Systems such as IBM PureSystems. These systems are highly flexible, seamlessly combining the wide range of technologies available in the industry today, and provide a modular “cloud computing infrastructure block” that scales simply by the addition of identical units.

As customers move to consuming computing as a utility, rather than building their own “power stations”, the “computing utility providers” will build capacity simply by assembling cloud computing infrastructure from a large number of modules of this sort. IBM PureSystems offers these modules today to facilitate the construction of such large cloud services for the coming world.


Posted by

GK Abhilash

You have plans for your business growth: plans that give wings to your aspirations and push you to venture into uncertain challenges. A lot of effort goes into turning these challenges into growth. IT infrastructure is one such challenge; addressed on time with a competitive solution, it helps your business soar. That is why choosing the right IT solution becomes a priority, and that is the space IBM System x occupies.

IBM System x is not just a hardware device. It is a complete server solution designed for any kind of ambition. Scalable and flexible to match your business demands, IBM x86 servers are also equipped with solutions that support your business plans. No matter what plan you have, this single solution is ready to fit in.

The advantage lies in its architecture, which is why solutions like SAP HANA, virtualisation, virtual desktop infrastructure, mailing and collaboration, and cloud run best on it. Packaged with these solutions, IBM x86 servers help you get the best of every one of them. In short, it is a solution designed to support your business ambition.

The five commandments for a perfect solution to drive any ambition:

Measurable Benefits: Benefits must be quantifiable to create a TCO/business case.

Best of Breed Components: Integrated best of breed components that work perfectly together.

Easy to Understand: The IT team should be clear on the benefits of the solution.

Easy to Manage: Should require only basic IT skills to operate after training.

Interoperable: Must be compatible with existing infrastructure and build on its efficiencies.

The IBM design advantages a System x server has:

Memory ProteXion: Provides very high memory availability at far lower cost than online sparing or memory mirroring. It allows customers’ servers to be more resilient in the face of errors. Using industry-standard components, IBM manages to provide better resiliency, which in turn reduces downtime and helps customers focus on their business and worry less about the hardware. It is ideal for running mission-critical workloads without any hassle.

Calibrated Vectored Cooling: Datacentres are kept at very low temperatures, which adds to cost. IBM servers, with their honeycomb design engineered for efficient heat dissipation, can function at higher temperatures without any problems. This allows datacentre managers to raise the ambient temperature, saving a huge amount in power costs while maintaining a pleasant working environment.

Light Path Diagnostics: Light Path Diagnostics is a pop-out/drop-down panel with an LED for each major component: processor, memory, hard disk drives, adapter slots, etc. Another LED beside the specific component identifies the failed part. This saves precious time in determining which component has failed, helping customers rectify issues faster and get the business back on track. In today’s scenario, where customers use ever more memory for needs like virtualisation and databases, IBM makes it simpler and more precise to identify the error and act on it.

Integrated Electronic Services: IBM Electronic Services integrates the IBM support community with your company to ensure that your IT environment is running with minimal disruption and maximum efficiency.

MAX5: A patented IBM architecture that allows memory expansion without adding CPUs, leading to lower power consumption and a significant reduction in licence costs.

Virtualisation Manager: IBM Virtualisation Manager, working with IBM Director, allows you to manage physical and virtual machines from a single console. This simplifies management of VMware ESX Server, Microsoft® Virtual Server, Xen and IBM POWER™-based virtual server environments.

Predictive Failure Analysis: Should any component threaten to fail, IBM Predictive Failure Analysis® (PFA) gives a 48-hour advance warning, on more components than any other system. It allows customers to pre-empt failures and be ready, so they can run critical applications without downtime.

So, make plans for your business to grow with IBM System x servers. Click here to read more. Watch the video here:


Posted by

Rizwan Naikwadi

While IBM PureSystems continues to flourish, penetrate the market, and lead in converged infrastructure solutions, it is equally important to continue the tradition of innovation that IBM as an organization is well known for.

In this blog, I would like to highlight the way in which IBM PureSystems complements the SAP Business Solution in a unique way.

Every organization today operates in silos: an infrastructure team, a database team, and an application team, for example. These teams are important for managing business needs. However, the biggest challenge today is the isolated way in which they operate, resulting in a slow, delayed, non-agile landscape.

Customers today are aware of the integration that Flex System Manager (FSM) offers across compute, networking, storage and virtualization technology. On a similar principle, SAP Landscape Virtualization Management (LVM) integrates all the SAP modules in a closed, integrated fashion. Because these two stacks each offer an integrated view at their respective levels, it is a natural choice for any customer to leverage the two integrated modules, complementing each other, for improved business value.

An SAP customer’s landscape consists of several SAP systems, each supporting a specific business application or dedicated to development and testing. The most common SAP applications are combined as the SAP Business Suite, which combines functions for ERP, CRM, SCM, BW, and much more. The SAP NetWeaver layer also abstracts the SAP Business Suite from the hardware, operating system, and relational database systems. This enables the SAP Business Suite to run on top of almost all server platforms, all of which are available on IBM PureFlex Systems.

The primary benefit of FSM integration with SAP LVM is to bridge the operational worlds of hardware administrators and SAP administrators. With such high-level integration, the following results can be achieved:

Every business operation demands good response times, scalability and high availability (HA) at all times, and SAP infrastructure predominantly requires HA features, which can easily be provided on IBM PureFlex Systems. Production and non-production systems can operate within the same set of compute nodes and still provide the required high availability with non-disruptive failovers, using the logical partitioning and system pooling features of these systems.

Customers who need a business copy of the production environment for test and development purposes find it difficult to perform this copy during peak production hours. IBM Tivoli Storage FlashCopy Manager (FCM) with SAP LVM helps perform these non-disruptive, seamless, application-aware business copies using IBM storage FlashCopy features.

The interconnection of business applications and global access to business services has made the management of the IT landscape more challenging. Manually operating and administrating a growing number of individual systems or system components is no longer an option. Here, the concepts of this integration not only promise to save costs but also to increase flexibility, elasticity, and automation of system operations to efficiently serve the needs of the business.


Posted by

Badrinathan Jayaraman

Data footprint reduction is the process of employing one or more techniques to store a given set of data in less storage space. By applying data footprint reduction methods, a business can reduce storage costs and improve the performance of the backup/restore process needed to protect vital data.

The amount of information that businesses must store is growing at an accelerating pace. Yet, there is a never-ending pressure to reduce IT costs and improve data backup/restore performance. Data footprint reduction technology uses compression techniques to eliminate redundant data and store more information in less storage space.

The idea behind compression is to identify commonly occurring patterns in data, represent those patterns using some type of shorthand, and then store the shorthand version rather than the original data, with the goal of saving storage space. Data compression is accomplished by applying a compression algorithm (i.e., a process of converting original data strings into shorter ones without losing information) to a set of data and storing the resulting compressed data rather than the original.

The amount of compression achieved will vary with the characteristics of the data and the compression algorithm in use. Typically, compression results for either production data or backup data range from 50 percent (i.e., a 2:1 compression ratio) to 80 percent (5:1). So if you have a 10 GB file to store and you achieve a 2:1 compression ratio, you can store that file in only 5 GB of physical storage space, a significant saving. Since applying a compression algorithm takes processing power and time, some compression solutions can reduce overall system performance.
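To make the arithmetic concrete, here is a minimal Python sketch, using zlib purely as a stand-in for whatever lossless algorithm a storage product applies; the sample data and the resulting ratio are illustrative, not measurements of any IBM product:

```python
import zlib

# Repetitive data compresses far better than random-looking data,
# which is why achieved ratios vary with the workload.
original = b"customer_record;" * 65536          # ~1 MiB of repetitive data
compressed = zlib.compress(original)
print(f"{len(original)} -> {len(compressed)} bytes "
      f"(about {len(original) / len(compressed):.0f}:1)")

# The storage arithmetic from the text: physical space = size / ratio.
for size_gb, ratio in ((10, 2), (10, 5)):
    saved_pct = (1 - 1 / ratio) * 100
    print(f"{size_gb} GB at {ratio}:1 -> {size_gb / ratio:.0f} GB physical "
          f"({saved_pct:.0f}% saved)")
```

At 2:1 the 10 GB file fits in 5 GB (50% saved); at 5:1 it fits in 2 GB (80% saved), matching the typical range quoted above.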

There are two kinds of compression technology available: inline compression, in which data is compressed before it is stored on disk, and the traditional post-process approach, in which data is compressed after it has been written to disk.

Let’s take a look at what makes the IBM Real-time Compression algorithm different from other solutions. Traditional compression algorithms take a given data file, break it into small chunks, and run these chunks through the compression algorithm, resulting in a rigidly-structured, compressed file of variable size, depending on original file size and compressibility. Whenever the original data file is subsequently changed and resaved, everything in the compressed file after that change has to be recreated from scratch. This can impact overall system performance and result in compression ratios that degrade over time due to disk fragmentation, garbage collection, etc. And since de-duplication no longer recognizes anything downstream of the change, traditional compression undermines the effectiveness of the de-duplication process, which winds up rewriting data that hasn’t changed.

With IBM’s Real-time Compression algorithm, a stream of data is run through the algorithm until the algorithm is able to produce a chunk of fixed size, organized in a file of flexible structure. Whenever the data is updated, only the modified sections of the compressed file are changed and so the file size remains the same, maintaining compression levels and improving performance.

Apart from this, IBM Real-time Compression often improves overall I/O performance, because the total amount of data written to disk is reduced.
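The offset problem described above is easy to demonstrate. The sketch below uses plain zlib and is purely illustrative (it is not IBM’s algorithm): it compresses a file in fixed-size input chunks, then edits only the first few kilobytes. Because compressed chunk sizes vary, every chunk after the edit lands at a new offset, so the downstream portion of the compressed file must be rewritten — exactly the cost a fixed-size-output design avoids:

```python
import zlib

def chunked_compress(data, chunk_size=64 * 1024):
    """Traditional scheme: compress fixed-size *input* chunks. Compressed
    sizes vary, so each chunk's offset depends on all chunks before it."""
    offsets, blob = [], bytearray()
    for i in range(0, len(data), chunk_size):
        offsets.append(len(blob))
        blob += zlib.compress(data[i:i + chunk_size])
    return bytes(blob), offsets

data = bytes(1024 * 1024)                  # a 1 MiB file of zeros
_, before = chunked_compress(data)

edited = b"X" * 4096 + data[4096:]         # change only the first 4 KiB
_, after = chunked_compress(edited)

moved = sum(1 for a, b in zip(before, after) if a != b)
print(f"{moved} of {len(before)} chunk offsets shifted by one small edit")
```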

Watch the video on Real-time compression here:


Posted by

Rizwan Naikwadi

Infrastructure management is the most critical part of the IT arena today. We have to understand how our customers are meeting their IT demands, how they are managing their IT infrastructure, and what more we can do for them.

As we all know, cloud has always been on every customer’s wish-list, and it is perceived as desirable yet expensive. Knowing this, PureSystems is designed as a next-generation platform, and we ensure a high degree of investment protection, which brings a lot of value to our customers.

When we talk about consolidating our IT infrastructure using virtualization, it is very much in the interest of every IT organization to understand how exactly these upcoming technologies help reduce real-life operational costs.

While we continue talking about consolidation of IT infrastructure, the next big thing that comes to mind is virtualization. The obvious reason is that in the traditional approach, our compute, networking, storage and other resources are never utilized to the fullest. This impacts operational costs: we pay for 100% of the capacity but practically utilize no more than 70-75%. Virtualization addresses this problem and ensures efficient, complete utilization of the available IT resources. Because most of our customers have only partially virtualized IT infrastructure, they are unaware of complete virtualization and its benefits.

Frequently asked questions:

What do we mean by fully virtualized Infrastructure?

Virtualization, as we all know, is primarily about efficient utilization of available resources. It is possible to virtualize almost anything. People are well aware of virtualization in the compute and network layers. What they often don’t know is that even the storage layer can be fully and efficiently virtualized. That is the icing on the cake.

Can there be any problem with having the storage layer virtualized along with the associated compute and networking layers?

IBM PureSystems answers this doubt. It intelligently chooses the IBM Storwize V7000 from IBM’s storage portfolio, leveraging its rich, advanced features to the great benefit of our customers. Of all the rich features the Storwize V7000 has, disk virtualization plays a very important role in achieving complete utilization of overall storage capacity. This increases performance and ensures high availability at all times.

So with the IBM PureFlex System, we can now have the entire infrastructure stack virtualized and optimized to deliver the best industry performance that any application or workload may require.

What would be the next step for any customer once they consolidate and optimize their IT infrastructure?

It’s time to innovate and accelerate!

Now that you have consolidated and optimized your IT infrastructure, it’s time to understand what complexities remain after consolidation.

A few of the possible complexities are as follows:

- Multiple management consoles, one for every layer (server, network, storage, hypervisors, etc.).
- Multiple administration experts to manage each of these layers.
- Separate SNMP and mail alerts at every layer to report problems, if any.
- Managing and maintaining the inter-dependencies and interoperability between all these layers.

IBM Flex System Manager (FSM) is the answer to all of the above operational problems. It not only addresses them but also ensures that every minute detail essential for managing today’s IT environment is covered. It also provides an overall consolidated report of your systems’ performance, utilization, and availability at all times.

FSM provides a single pane of management for multiple compute nodes (Intel+Unix), networking, storage, multiple hypervisors, operating systems, consolidated service and support console, automated call logging facility, and more.

Apart from the capabilities mentioned above, IBM Flex System Manager comes with built-in cloud functionality called “SmartCloud Entry” that can be leveraged to accelerate, automate and simplify your daily IT processes and operations. It brings automation to the point where common operations reduce to around four clicks on the cloud management console.

IBM PureSystems are meant to provide an entire infrastructure stack with all the essentials, as well as the optional extras, that every customer would want. SmartCloud Entry is one such feature that can be easily deployed and used with IBM PureSystems, taking your IT to the next level.


Posted by


Mrinal Jain

The large amount of buzz around Cloud Computing has prompted many engineering colleges in India to begin a discussion around setting up a Cloud Center of Excellence within their campus. This appears to be an imperative, not only because these institutes need to train the workforce of tomorrow, but also because it promises to solve some of the resource constraints they face, as well as increase the level of collaboration they seek with other universities.

What aspects of cloud computing should students get practical experience with and how should colleges structure this? What is the minimum infrastructure needed to set up a “lab” in which students can get this experience?

Cloud Computing is a new paradigm which requires business and IT to get on the same page: to agree on a final objective and then align the IT deployment to meet that end. In an academic context, many students dream of writing an application which they can host and build a web business around. It would be most useful for each of them to understand how to do so at every level: to actually develop and host an application at “webscale” and, by getting basic experience with building each layer of this stack concurrently with classroom theory, understand the tradeoffs at each layer. I believe this is the best experience: learning by doing what many cloud professionals do in the “real world.”

Colleges should include a project that runs in parallel with formal classroom theory. As part of such a project, each student is required to build and deploy an application from scratch, one that could, upon completion, be hosted on a commercial cloud service over the Internet. This requires students not only to think through what the application would do, but also to specify the audience they intend to target; to think through the architecture of the middleware platform on which it would be built; and to understand how the underlying infrastructure is “assembled” to support the higher levels of the stack. Within a project of this sort, students would need to think through the issues all cloud computing professionals face: robustness, availability, scalability and, of course, security.

The first objection I hear from institutions is the lack of resources. Which brings me to the next point: what is the minimum infrastructure required to get started with a lab of this sort?

Most bachelor’s-level academic institutions in India already have some IT infrastructure in place, but it is often “outdated” and inadequate for training students on current technology. Most of the time, the computer lab is limited to desktops with a small set of applications, plus a low-end server that provides some networked applications. Oddly enough, the desktops, though not powerful enough to run the required applications, tend to be underutilized.

The issue is that, like many corporates, academic institutes have also implemented their IT in silos. Labs are set up as special cases around a given course, or application. Colleges thus find (like many corporates) that they have underutilized infrastructure, and because each pool is tailored to a given task, inadequate resources for all tasks. What is needed is one common pool of resources (servers, storage and networking) that is adequate to meet the concurrent demand for a given set of students as needed, and flexible enough to quickly change between “workloads” (in this case the lab for each course). A Cloud deployment provides this through virtualization, standardization and self-service. Let us look at this briefly.

To create this common pool of resources, college “IT departments” need to adopt virtualization wholeheartedly. What this allows is the consolidation of existing IT resources with newer resources as they are procured. Virtualization releases basic elements from their physical packaging: basic cores of compute become available across machines, basic banks of memory across servers, basic ports of networking across switches and basic bytes of storage across disks. Virtualization management even allows a single view of physical resources across “boxes”.

To make this consolidated infrastructure usable by students, it is important to provide “packages” appropriate to the task students are engaging with, set up according to the stage of building the “application”. For example, when students are setting up physical infrastructure, they should think in terms of appropriate packages of infrastructure (given a task, what is the right amount of compute, memory, storage and networking to package together); when setting up packages of middleware, they should think through the appropriate packages to host applications, and consider how the middleware should be set up to allow for scalability and availability, among other issues; and they should think through delivery mechanisms when offering a web service (a web-based software application).

To provide these “features”, a cloud management layer of software is required over the basic virtualized infrastructure. This not only enables management, administration and configuration of the virtualized resources, but also provides a self-service interface through which the created packages can be accessed and used by students.
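A minimal sketch of the idea follows; the package names, sizes and classes are hypothetical, not the interface of any particular cloud manager. The self-service layer is essentially a catalog of pre-sized packages carved out of one common virtualized pool:

```python
from dataclasses import dataclass

@dataclass
class Package:
    """A self-service package: a fixed slice of the common pool."""
    vcpus: int
    memory_gb: int
    storage_gb: int

# Hypothetical packages, one per stage of the course project.
CATALOG = {
    "infra-lab":      Package(vcpus=2, memory_gb=4, storage_gb=50),
    "middleware-lab": Package(vcpus=4, memory_gb=8, storage_gb=100),
    "webapp-lab":     Package(vcpus=2, memory_gb=2, storage_gb=20),
}

class CommonPool:
    """The single virtualized pool shared by every course and lab."""
    def __init__(self, vcpus, memory_gb, storage_gb):
        self.free = Package(vcpus, memory_gb, storage_gb)

    def provision(self, package_name):
        pkg = CATALOG[package_name]
        if (pkg.vcpus > self.free.vcpus
                or pkg.memory_gb > self.free.memory_gb
                or pkg.storage_gb > self.free.storage_gb):
            raise RuntimeError("pool exhausted; reclaim idle packages first")
        self.free.vcpus -= pkg.vcpus
        self.free.memory_gb -= pkg.memory_gb
        self.free.storage_gb -= pkg.storage_gb
        return pkg

pool = CommonPool(vcpus=64, memory_gb=256, storage_gb=4000)
for _ in range(20):                 # twenty students self-serve a package
    pool.provision("webapp-lab")
print("remaining:", pool.free)
```

Because every lab draws from the same pool, capacity freed by one course is immediately available to another, which is the whole point of replacing per-course silos.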

Students are already familiar with what cloud computing looks like – thanks to their use of cloud services in social networking, email and online gaming. Understanding what goes into building a service of this sort, potentially not only motivates them, but also gives them experience with the actual tasks involved in being a cloud computing professional. And if this is done alongside the learning in theory, it provides a continuous thread around which each student can understand this exciting new field.

A paradigm shift like Cloud Computing is best understood by doing. Students should learn cloud computing by engaging in the process of setting up the pieces of a cloud infrastructure layer by layer, grappling with the issues at each level, keeping in mind the final aim of hosting a web application.


Posted by

Jean Staten Healy

IBM PureFlex Systems hide complexity while also helping customers avoid virtualization vendor lock-in.

Hiding complexity – it has become the mantra of technology providers. As customers’ IT resources continue to be stretched, staffs are asked to do more, and budgets remain flat or grow only slightly, the demand for systems and interfaces that make operations simpler keeps increasing.

We understand that, and with this customer requirement in mind, IBM created the PureFlex System. However, at IBM we still think there are certain choices that should not be taken out of customers’ hands, because doing so would limit their agility now and in the future.

What is PureFlex? 

The PureFlex System is one part of the IBM PureSystems family of offerings that IBM announced back in April.  PureSystems offer clients an alternative to current enterprise computing models, where multiple and disparate systems require significant resources to set up and maintain. In particular, the PureFlex System enables organizations to more efficiently create and manage an infrastructure. In a sense, the PureFlex System provides the most basic set of compute elements – bringing together server, storage, and networking, as well as management and virtualization in one integrated offering. The result is that clients can start their deployment journey with much already done for them by the factory at IBM. With the built-in management node that we have in PureFlex, as well as how all the compute, storage and network components fit together, we are answering a clear need in the market.

Think of it as infrastructure ready to be used as a service that also includes Infrastructure-as-a-Service (IaaS) private cloud management software, IBM SmartCloud Entry, so you can stand up an IaaS private cloud as well.

But just because we are packaging up the pieces for easy deployment, it does not mean that we are taking away choice and flexibility from customers to sculpt the system to fit individual needs and make adjustments later as needed. Far from it.

Deployment Choice with Room to Grow

PureFlex is designed to be deployed in a variety of sizes and scales so it can be used by large enterprises or mid-size customers. PureFlex comes in three flavors, Express, Standard, and Enterprise, which are starting points in terms of the size of infrastructure customers want.

  • Express is designed for small and medium businesses and is the lowest price entry point.
  • Standard is optimized for application servers with supporting storage and networking and is intended to support key ISV solutions.
  • Enterprise is targeted at scalable cloud deployments and has built-in redundancy to support critical applications and cloud services.

Of course, you can always add capacity and scale and grow. And there are mechanisms to pay as you grow, both from a hardware perspective and from the cloud or service aspect. Essentially, it is designed for and is targeted at a broad swath of customers – not just the mid-market or large enterprise. As the name implies, flexibility is in the PureFlex System’s DNA.

Choice of Hypervisor, Architecture, Operating Systems: Choice in the Same Platform

In terms of the virtualization environment that you can get within the PureFlex Systems, there is choice there as well. You certainly have the x86 virtualization environments, KVM, Microsoft Hyper-V, and VMware’s vSphere. The PureFlex line also includes compute nodes that are based on the IBM Power CPUs, so the virtualized environment in that case is based on PowerVM as the hypervisor.  The result is that you get multiple hypervisors, multiple CPU architectures, and also therefore multiple operating systems that are supported within the same platform.

We are hearing from clients, analysts, and other sources that multiple x86 hypervisors are more frequently being deployed within the same data center. In fact, according to a recent study of 345 IT professionals by the Gabriel Consulting Group, almost half of the respondents said they were using two or three hypervisors, and 18% were using four or more. “Hyperversity,” as Gabriel put it, is increasingly the choice. With a mixture of hypervisors becoming more common, any platform that can enable that mix and provide a common user experience, as PureFlex is designed to do, provides a lot of advantages. Increasingly, customers are rethinking what best suits their needs and their requirements.

Why KVM?

With PureFlex, each compute node uses a specific CPU architecture, and then on that CPU architecture, a specific hypervisor – but multiple CPU architecture and hypervisor choices can be inside a single chassis. That enables them to share the networking or the shared storage and also the management – both for hardware and also for virtual resources – that are in the chassis as well as that are in the Flex System Manager node which provides multi-chassis management.  Things that can be kept common are kept common and then per compute node you can have a different virtualization environment.

The appeal of KVM comes from its performance, security, and other advantages. For example, particularly in an integrated system such as PureSystems, where the complexity is hidden from the customer, we are able to integrate KVM more completely with the PureFlex System than we can Hyper-V or vSphere. KVM is part of Linux, and as a result IBM has access to the KVM source code, an IBM development team contributing to KVM, and a relationship with Red Hat that allows us to customize the build.

Our customers want the ability to change hypervisors. With PureFlex they can start out with one hypervisor and migrate to a different hypervisor without reconfiguring the system, including the management infrastructure.

This is relevant because although customers want the simplicity of an integrated system, they may need customization for particular workloads, not a cookie-cutter approach. PureFlex provides the ease of use that is required, but still enables choice on a range of levels to provide flexibility – now and in the future.

Jean Staten Healy

Director, Worldwide Linux and Open Virtualization, IBM

November 2nd, 2012
9:05 am

Posted by

Rizwan Naikwadi
Rizwan Naikwadi

IT has been growing, emerging and evolving ever since its inception, and this has led to many ideas and innovations. Today, IT is known to the world as Information Technology. In the IT manufacturing segment, however, it is no longer quite the same: there we happily read it as Integrated Technology, a new initiative to redefine IT.

So does that mean the IT industry today holds a majority share of the global space?

What has actually changed at the ground level is the way in which IT elements and resources are consumed, deployed, and leveraged. This fundamental change arose from users’ ever-increasing demand to automate their daily operations.

The need for this rapid change in technology, from innovation to smarter computing and from miniaturization to automation, has led to a new phenomenon called Integrated Systems, also known to many of our customers as Converged Systems.

What is an Expert Integrated System?

The time has come for a new way forward, one that combines the flexibility of general-purpose systems, the elasticity of cloud and the simplicity of an appliance tuned to the required workload. When expertise is integrated throughout your enterprise, the experience and economics of IT will fundamentally change.

For example, you can improve the productivity of your IT operations staff by up to 20%, shift another 10% of your IT budget from systems maintenance to revenue-generating initiatives, and use systems designed to be up and running in hours instead of days or weeks, systems that require zero downtime when upgrading capacity and delivering system-wide life-cycle maintenance.

Expert Integrated Systems are the building blocks of capability that represent the collective knowledge of thousands of deployments, established best practices, innovative thinking, and IT industry leadership. These are now well known as IBM PureSystems.

There are two types of Expert Integrated Systems, or IBM PureSystems: the IBM PureFlex System, an infrastructure system, and the IBM PureApplication System, a platform system.

These expert integrated systems give IT professionals like you the freedom to use the valuable time and skills in your team to focus on innovation and growth. They deliver expertise at different levels and to different roles throughout an organization, from business leaders to data center managers. The platform system inherits the capabilities of the infrastructure; in other words, the expertise built into the infrastructure system can flow into the platform system, resulting in compounded benefits. The result is an appliance that can be highly customized and designed to optimally cater to almost all essential industry workloads.

A customer traditionally faces the challenges of manual integration, installation, tuning, optimization, application deployment, and testing before implementation. This entire process can take approximately 90 days of effort, whereas with IBM PureSystems the pre-integrated expert system can be ready to use in less than 5 hours. Be it the infrastructure stack or the application stack, IBM owns the entire stack. Because the ownership lies entirely with IBM, the machine, once ordered, arrives pre-integrated and pre-configured with patterns of expertise, which means connectivity, compatibility, recommended firmware levels, and performance bottlenecks are all sorted out even before it reaches you.

IBM, as always, has rightly understood the basic concern and challenge every IT manager is experiencing today: operational and maintenance cost, which IBM answers with industry-leading technology and performance. The systems designed today are also capable of accommodating future technology roadmaps, which provides immense investment protection to all our customers.

Hence, with such a product in place, customers need no longer worry about having domain experts who just manage the infrastructure and keep the data center in order. Those experts can think beyond their defined scope of work and focus on improving and innovating the deployed infrastructure, which is more beneficial and productive for any firm than simply monitoring and maintaining it.

So with all the explored parameters, customers now understand that it’s time to change the game and IBM PureSystems is here to set the ground rules for them.

More details on how exactly operational cost is targeted at the administration level will be available in my next post. Stay connected!

Click here to read more about IBM Expert Integrated Systems.

Click here to watch IBM PureSystems Videos.

Posted by

Ravi Khattar

In this new era of smart computing, data is being generated in larger volumes and at faster speeds. This puts pressure on the storage solutions deployed in data centers, which must maximize the organization’s IT investments. Organizations are demanding more efficiency in storage utilization, storage management, and storage deployment, and these requirements are forcing them to look at smarter ways to use, manage, and deploy storage solutions. IBM’s Smarter Storage now has answers to all of these challenges and more.

The Smarter Storage journey begins with understanding your data. Generally, all the data that gets churned out is stored, but not all of it is equally important. Segregating primary data from secondary data and storing each according to its priority is essential. Instead of storing all the data on a single storage platform, IBM Smarter Storage lets you map data to different tiers of storage. Smarter Storage is designed to improve the efficiency of storage utilization, followed by more efficient management; cloud deployment also helps manage the cost of storage.

The prime attribute of IBM Smarter Storage is that it is efficient by design. It has built-in, highly integrated storage technologies that help it function on its own, and managing it is intuitive and easy, improving the efficiency of your most important asset: your people. You don’t have to worry about excessive space usage, as you can cut the space requirements for active data by up to 80%. Moreover, you can save up to 47% of administrator time and reduce complexity by up to 30%.

Next comes its self-optimization ability. It analyzes data access patterns, adapts to them, and improves performance accordingly: sophisticated internal analytics automatically place data on the appropriate storage tier. As a result, performance can increase by up to three times.
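The idea behind that self-optimization can be sketched in a few lines. This is illustrative only, the general hot/cold tiering pattern rather than IBM’s internal analytics: count accesses per storage extent and keep the hottest extents on the fastest tier.

```python
from collections import Counter

access_counts = Counter()        # I/O count per storage extent

def record_io(extent_id):
    access_counts[extent_id] += 1

def extents_for_ssd(ssd_slots):
    """Extents that currently deserve placement on the fast (SSD) tier."""
    return [extent for extent, _ in access_counts.most_common(ssd_slots)]

for extent in [3, 7, 3, 3, 9, 7, 3, 7]:   # simulated I/O trace
    record_io(extent)

print("promote to SSD:", extents_for_ssd(ssd_slots=2))   # -> [3, 7]
```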

The third attribute is agility in the cloud. IBM Smarter Storage is fully at home in highly virtualized cloud environments: both IBM and non-IBM storage are managed as one virtual storage pool, and data is automatically synchronized between facilities, so it moves seamlessly and is safely stored without any disruption. It reduces disk space needs by up to 50%, speeds storage deployment by up to 26%, and improves application availability by up to 29%.

IBM has several storage solutions that give you an ongoing capability to manage your data. One of the most important developments in IBM Smarter Storage is IBM Real-time Compression, accompanied by a new capacity planning tool that provides a quantifiable savings estimate to assist you in planning your storage budget. Real-time Compression helps you compress your primary data so that you can keep more data online. By shrinking primary data, all subsequent copies of that data, such as backups, archives, snapshots, and replicas, are also compressed. Real-time Compression is the newest advancement in storage efficiency, and it does all of this without hampering the files.

IBM Smarter Storage has several storage products under its umbrella. Some of the most recent inclusions are the IBM Storwize V7000, System Storage DS8000, IBM Scale Out Network Attached Storage (SONAS), and the IBM Linear Tape File System Storage Manager.

So, pick from the IBM Smarter Storage portfolio and equip your organization to face the data explosion without difficulty.

To know more, click here http://ibm.co/PBIclV.


August 1st, 2012
10:51 am

Posted by

Kashish Karnick

IBM PureFlex Systems Blog

Customers face many issues daily, but if I had to pick two that are truly disruptive, they would be deployment and management.

About the Issues

When a business user comes up with a requirement, they typically would need an application or set of applications to make it work. That’s where IT comes in. Business users find it difficult to understand the complexity in the IT industry today. Technology – across the stack – changes every 3 months!

If you think about it, it takes 2-3 months to specify, design, and call in multiple vendors to propose solutions; another 2 months to negotiate and take delivery; and then the work starts! Keep aside another 3 months for hardware testing, integration, and user certification. Only after that does the application go live.

And that’s assuming Murphy’s Law doesn’t strike, which would mean further delays. That’s a minimum of 9 months lost in business responsiveness!

But that’s just part of the problem. The bigger headache is managing this setup, and integrating it into an existing setup is an even bigger problem. In fact, after speaking to many customers, I know that at least 70%-80% of their time goes into managing existing infrastructure. The remaining 20%-30% goes into getting projects up and running.

Big Blue Steps in…

Three years ago, we went back to our thought leaders, our experts, to address these problems.

The brief: start from scratch and build something new, while ensuring that this system was OPEN and able to work on a framework of open standards.

IBM today has 45 years of virtualization experience. In fact, we invented virtualization and the computing model we know today. We have millions of application deployments across the globe, and more experience in managing IT complexity than our competition.

So, you may ask, how can we convert this experience into something tangible and translate it into business value for our customers and partners? Or, in short: what’s in it for me?

And that’s where these Expert Integrated Systems come into play.

Our thought leaders came up with an architecture that delivered just what you were looking for. Not only that, they also defined the most efficient processes for carrying out these tasks: expert ways of doing something!

So what is expertise? I would classify it in two parts: one at the application level, the other at the infrastructure level.

Application Expertise – The most efficient and optimal method of deploying and managing an application.
Infrastructure Expertise – The most efficient and optimal method of managing an infrastructure.

The Value of IBM Experience

I’ll use an example of infrastructure expertise to illustrate what I mean by the expertise a system like this should have.

Doing a firmware upgrade on a server is quite complicated. Now imagine that server in a virtualized environment…the complications increase exponentially!

But don’t take my word for it; the table attached discusses some of the complexity involved in doing a firmware upgrade (in reverse order).

Let’s say there are 99 different methods of doing the above. But IBM’s experts know from all of their experience which is the MOST optimal.

So we’ll automate method number 67. That’s what Built-in Expertise means: automating what experts do, in the best way possible.
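To give a feel for what “automating method number 67” amounts to, here is a toy sketch of a rolling firmware update in a virtualized cluster. The classes and the workflow simplification are mine, purely illustrative, not an IBM tool or API: guests are moved off a node before it is flashed, so the update never interrupts a running workload.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    firmware: str
    vms: list = field(default_factory=list)

def rolling_firmware_update(nodes, target_fw):
    """One node at a time: evacuate guests, flash, verify, move on."""
    for i, node in enumerate(nodes):
        peer = nodes[(i + 1) % len(nodes)]   # evacuation target
        peer.vms.extend(node.vms)            # "live-migrate" the guests away
        node.vms.clear()
        node.firmware = target_fw            # flash + reboot + health-check
        print(f"{node.name} -> firmware {target_fw}; "
              f"guests kept running on {peer.name}")

cluster = [Node("node1", "1.0", ["vm-a", "vm-b"]),
           Node("node2", "1.0", ["vm-c"])]
rolling_firmware_update(cluster, "2.0")
```

Encoding the expert’s ordering of these steps, once, is what turns “99 possible methods” into one repeatable, automated one.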

But it’s not easy to do this.

Not with access to the server layer alone: one also needs access to the storage, network, and hypervisor layers. To top it all, no single vendor had access to the source code of every part of the stack. IBM had every layer except networking. With the acquisition of BNT, a Nortel spinoff with a 65% market share in embedded switches, that hurdle was cleared.

Now that we have this entire stack, it becomes easy to integrate and automate various tasks.

  • We now have L3 network switches that are virtual-machine aware
  • We have code-level access to KVM and PowerVM, and API access to Hyper-V and VMware
  • We have storage that can do both internal and external virtualization
  • We have a management console that not only manages ALL of the layers, but is also aware of the inter-relationships between the various stacks

This is Integrated by Design.

So Built-in Expertise + Integrated by Design = a Simplified Experience for the user, from both a deployment and a management perspective: the two pain points we discussed.

And that’s what IBM PureSystems is about.

A system that has all the Built in Expertise that 40+ years of IBM experience has to offer, built into a system that has been integrated by design, so that customers have a simplified experience.

The PureSystems Family is made up of two stacks:

The PureApplication System – which contains all of the Expertise that IBM has to offer

The PureFlex System – which contains management capabilities that simplify infrastructure

PureFlex System is a Subset of PureApplication System, as can be seen in the picture alongside.

The PureApplication System comes with the IBM middleware stack, which includes WebSphere Application Server, Tivoli management, DB2, and Rational tools.

But the real value of the PureApplication System lies in the Expert Patterns, which have process automation built into the system, with capabilities that include patterns like Web Application Code, Database Application Code, Data Mart Code, Workload Management and Metering/License Management. I know that’s a mouthful, but this stack addresses the complexity that comes with management.

In fact, these patterns allow us to create virtual appliances fine-tuned with the help of IBM expertise, but also with the capability of bringing in expertise from the whole IT community. Think of it like the Apple App Store for the enterprise. You can visit the IBM PureSystems Center for a list of 150+ global ISVs who have created appliances for the IBM PureSystems family.
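To illustrate what a “pattern” boils down to, here is a hypothetical sketch; the structure and field names are mine, not the actual PureApplication pattern format. A pattern is a declarative description of a workload that the system can expand into running virtual machines automatically:

```python
# A hypothetical, simplified "expert pattern" for a three-tier web app.
web_app_pattern = {
    "name": "web-application",
    "topology": [
        {"role": "load-balancer", "instances": 1},
        {"role": "app-server", "middleware": "WebSphere", "instances": 2,
         "scaling": {"min": 2, "max": 8, "cpu_threshold": 0.75}},
        {"role": "database", "middleware": "DB2", "instances": 1},
    ],
    "policies": {"availability": "restart-on-failure", "metering": True},
}

def deploy(pattern):
    """Expand the pattern into concrete virtual machines (simulated)."""
    for tier in pattern["topology"]:
        for n in range(tier["instances"]):
            print(f"provisioning {tier['role']}-{n} "
                  f"({tier.get('middleware', 'base image')})")

deploy(web_app_pattern)
```

The point is that the deployment knowledge lives in the pattern, not in a runbook, so deploying the same workload twice gives the same result.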

When I started this discussion, I stated there were 2 problems that IBM wanted to fix:
1. Deployment – Using the virtual appliance model to deploy expert patterns for applications reduces deployment from months to a matter of days.
2. Management – A single management console that recognizes every layer of the infrastructure, as well as the relationships between them, allows the system to automate most management tasks with ease.

And that, my dear readers, is the value behind PureSystems – the first Family in the Generation of Expert Integrated Systems from IBM.

Read more on IBM PureFlex Systems at http://ibm.co/NTubCz.

