Recently we wrote about the functionality wants and needs we heard from database administrators, and how we incorporated that feedback into the new Double-Take Share 5.2. Double-Take Share 5.2 is market-driven, which is not always the case with software design. Sometimes software functionality gets built in just because an engineer figured out how to program something interesting or cool. We don’t design software that way; instead we poll the market first to find out what users need, then we start programming.
Double-Take Share is a result of that process. You’ve heard us say that Double-Take Share makes it easy to capture, transform, enhance, and replicate data across multiple databases, operating systems and physical, virtual or cloud platforms. You’ve also heard us say that automated, real-time data sharing improves efficiency and decision making. But how does that work? Here’s a breakdown of Double-Take Share functionality that was requested by database administrators:
Database administrators need to share information all over the organization. Reliable, real-time replication between databases keeps data in sync and ensures data integrity. When you can share information in real time you get better queries, reports and business intelligence. And, because Double-Take Share supports all leading databases and operating systems, you can deliver your data to where it is needed, regardless of the platform.
Double-Take Share can take data from one database platform and transform it during replication for a different type of database. Now, you can modernize your legacy database by replicating it to a newer version of the same database, or a completely different database on a different platform. It’s easy. You can even consolidate databases. Double-Take Share’s collision monitoring and resolution will find and fix issues before they become problems. More and more organizations are subject to industry and government regulations. If your organization is regulated, Double-Take Share’s transaction auditing will help you stay compliant.
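To make the collision handling described above concrete, here is a minimal sketch of one common resolution strategy, last-writer-wins. The `Row` type, version counter and `apply_change` function are illustrative assumptions for this post, not Double-Take Share's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Row:
    key: int
    value: str
    version: int  # monotonically increasing change counter

def apply_change(target: dict, incoming: Row) -> str:
    """Apply a replicated row to the target table, resolving collisions.

    A collision occurs when the target already holds a newer version of
    the same key. Here we resolve it with last-writer-wins: the change
    with the higher version number survives.
    """
    existing = target.get(incoming.key)
    if existing is None or incoming.version > existing.version:
        target[incoming.key] = incoming
        return "applied"
    return "collision: target is newer, change discarded"

# Usage: a stale change arrives after a newer one and is rejected.
table = {}
apply_change(table, Row(1, "alice", version=2))
apply_change(table, Row(1, "al1ce", version=1))  # stale, discarded
```

Real replication products layer auditing and configurable resolution rules on top of a core like this, so conflicts are found and fixed before they reach reports or downstream systems.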
Then, everybody told us their LANs and WANs are already working about as hard as they can, so we made Double-Take Share LAN- and WAN-friendly.
As the data in storage continues to grow at the speed of light, IT departments are under increasing pressure to create intelligent reports and perform useful analysis for business intelligence. Double-Take Share helps administrators perform these tasks on real-time data, so reports are current and decisions can be made based on up-to-the-minute information.
One of the biggest problems that we hear over and over again is that organizations have redundant data on multiple systems, and they have no process in place for ensuring that it is consistent across these database silos. Double-Take Share solves this problem by keeping redundant data in sync across databases.
In the same vein, Double-Take Share also feeds current data to customer-facing e-commerce or web portals so users have access to real-time information.
Of course Vision has always been known for having the most streamlined, robust migration solutions on the market, so we made sure that Double-Take Share lives up to that reputation. When you need to migrate, Double-Take Share will make the process fast, easy and risk-free. Best of all, you told us that nobody in your organization has time for a giant learning curve, so we designed an easy graphical interface so you can start to reduce administration time, replace manual processes, and automate your data sharing right away. And it does all of this with no programming required. Stay tuned for a deep dive into the inner workings of Double-Take Share. If you are a database administrator, you’ll appreciate it.
Recently we wrote about how technologies tend to go through a market-driven evolution instead of engineers creating products just because they can. Double-Take Share 5.2 is a result of this process and we created it in direct response to what database administrators, business intelligence analysts and CIOs were telling us they needed in a massively changed landscape. We thought we’d share their requests – see if you recognize your own needs in any of these:
“We need to be able to share and integrate data between different databases, platforms, hardware and operating systems for a whole bunch of new reasons. And we need to be able to do it in real time.”
So many organizations have gone through M&As in the last few years. All of them ended up with a bunch of different database management systems and now they need to consolidate data for master data management or for dashboarding for executive decision support systems. Other organizations are just dealing with aging IT infrastructure that has evolved over time into a datacenter with different databases and platforms. These organizations said they needed to share data with supply chains, or between a central location and satellite offices.
“We need real-time access to information from multiple databases for centralized reporting, Business Intelligence, data analytics, and master data management purposes.”
Data is growing at a truly stunning rate. IDC reports that data is growing at a rate “equivalent to every U.S. citizen writing 3 tweets per minute for 26,976 years.” They also predict that over the next decade “the number of servers managing the world’s data stores will grow by ten times.” IT departments see managing Big Data as a big hurdle, but leadership only sees big opportunities in the new ability to study trends and information on a massive scale. The need for real-time centralized reporting presents a whole new challenge for IT staff.
“We need to be able to separate the workload of day-to-day transactional use from supporting the organization’s BI business processes”
For example, today’s IT staff needs to be able to replicate data from an OLTP database tuned for transactional workloads to an OLAP database tailored for operational analytics; from the transactional database to a data analytics database; and similar combinations. We call that “getting the right horse for the course,” so that you can ensure smooth data flow among different databases.
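As a rough illustration of what OLTP-to-OLAP replication accomplishes, the sketch below folds individual transactional rows into a running analytical aggregate, so reporting queries never touch the transactional system. The function name and data shapes are hypothetical, assumed purely for illustration:

```python
from collections import defaultdict

def replicate_to_olap(oltp_rows, olap_totals):
    """Fold individual OLTP order rows into an OLAP-style aggregate.

    oltp_rows:   iterable of (customer_id, amount) transactions from
                 the transactional (OLTP) side
    olap_totals: dict mapping customer_id -> running revenue total,
                 representing a fact table on the analytics (OLAP) side
    """
    for customer_id, amount in oltp_rows:
        olap_totals[customer_id] += amount
    return olap_totals

# Usage: three transactions replicate into per-customer totals.
totals = replicate_to_olap(
    [("c1", 100.0), ("c2", 40.0), ("c1", 60.0)],
    defaultdict(float),
)
# totals["c1"] == 160.0, totals["c2"] == 40.0
```

The point of the separation is exactly what the quote asks for: the OLTP side keeps serving fast writes while the OLAP side absorbs the heavy read and aggregation workload.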
“We need to migrate several database sources to several targets at a quicker pace than ever before, and we can’t afford the risks we used to be able to tolerate with migration downtime and data loss.”
Increased M&A activity, virtualization, cloud adoption and aging legacy systems are major drivers behind increased migration activity. Migrations are commonplace these days, but the overall threshold for risk tolerance has dropped to almost nothing. The world has globalized, competition for market share is fierce, and new technology means everybody knows there is no longer an excuse for losing data or being offline, even for migrations that are aimed at making the business faster and more stable.
“We need access to current eCommerce data from corporate databases 24/7.”
Global competition, distributed workforces and an always-on consumer culture mean organizations need access to real-time eCommerce data around the clock. Without it, businesses risk losing customers to the competition, along with the competitive edge they’ve worked so hard for.
Did you see your organization’s challenges in any of these requests? If so, stick around – we’ll be telling you how we met all of these needs and more with the new Double-Take Share 5.2.
One of the interesting facets of being in the technology industry is watching business trends drive technical development. The cycle is always the same: Someone develops a technology for a certain purpose or to meet a specific need, people use it for what it was developed to do, then they start to think up new ways they can use it to meet other unfulfilled needs. The developer takes note of how people are actually using their product or technology and what the “new” need is, and then they make another iteration of the product that is purpose-built to meet the new need.
Years ago data sharing functionality just wasn’t on the radar. At the time, organizations had fairly homogeneous IT infrastructures and didn’t have to deal with the landslide of data we all have to manage and protect these days. Basic data sharing technology either did the trick or administrators cobbled together solutions on their own.
Then a funny thing happened. Merger & Acquisition (M&A) activity skyrocketed, especially in the middle markets. In 2011, middle market transactions made up about 47 percent of all target companies involved in M&A activity, or about 77 percent of the transaction dollars that year.*1 As a result thousands of organizations were scrambling to integrate their systems, share data quickly with people who needed it (both inside and outside the organization) and make sense of the incredible volume of data they now owned. Every day they struggled with their data, they lost more competitive advantage.
And, this was happening during a recession. So it wasn’t like the IT department could just hire more people or have someone write custom software. Nobody was throwing money at problems anymore. It follows that more than one DBA had a nervous breakdown trying to figure out how to move data, share data and extract some kind of business intelligence from all that new data. Software companies had no way of knowing that there was about to be a massive uptick in the need to share data across different platforms, operating systems and database types. Sharing technology existed, but it wasn’t always easy, it wasn’t fast, and the term “big data” wasn’t in the dictionary yet.
That’s the long story of how DBAs told Vision Solutions exactly what their problems were and exactly what they needed from data sharing software. Turns out nobody was adding headcount even after the recession ended, so administrators needed software that wouldn’t take up a bunch of time, because they didn’t have any time left in the day. They also needed an easy-to-manage technology that could share data across the vast mix of technologies they’d inherited, and it had to be easy to use.
We listened to what IT departments all over the world were telling us, we put our heads down, and we came back with Double-Take Share 5.2. This is the data sharing solution that DBAs told us they wanted and it’s the data sharing solution that our early adopters are raving about. The new Double-Take Share was built with the big data trend in mind. It was built for administrators who are short on time. It was built for IT departments that need to migrate but stay online and productive, that need to share data quickly and without complications, that need to use the staff they have to solve all new business problems.
We’re excited about what Double-Take Share 5.2 can do for your organization. Stay tuned and we’ll give you the lowdown on all the new features and capabilities.
Recently a major healthcare insurance payer experienced serious problems with their servers resulting from vendor application upgrades. Volume increased dramatically and the additional overhead had not been anticipated. Because they service so many customers, and are regulated by federal law, it was critical that they find an alternative solution – fast.
The customer decided to move the primary backup to a different system (known as a roleswap) as a temporary solution. However, a roleswap had not been tested on this system in the last three years. The customer called Vision CustomerCare for guidance. The CustomerCare Specialist set up a notification on the customer’s system that would alert CustomerCare if there was an issue in the process, then he let them know that he would be available for a few more hours and if they had any issues to please call him back personally.
The roleswap did produce a minor issue involving MQSeries on the new backup. The customer called their Support Specialist, who was by then at the end of his business day. Instead of transitioning the issue to a new Specialist, he stayed to resolve the issue through WebEx.
“The Specialist stayed with our MQSeries replication until it was functioning properly and we were comfortable with the way it was working. We saw the value of the roleswap the first night when we were able to move processors to our production backup system, and keep processors on our production primary system,” the customer said.
Due to the nature of the applications and data housed on the primary systems, the customer could have been in a critical situation if they’d been unable to reduce the processing volume on their primary system. The solution, iTERA Availability 6.0, allowed the customer to follow through on their plan with minimal interruption to their end users, while providing a quick and reliable solution to the situation.
Get a product demo of iTERA Availability!
Thanks to everyone who attended the presentation.
You can re-watch the webcast from our On-Demand Library!
One of the world’s largest chemical manufacturers, with facilities all over the world, contracts IBM to manage their IT infrastructure. Together, these enterprises have a long-term relationship with Vision Solutions, and we work closely with IBM if there’s a problem in their customer’s backup and recovery strategy. After all, if one of the world’s largest chemical companies were to lose critical servers in a disaster or outage, they’d risk a staggering amount of money in downtime costs, not to mention possible implications for production, distribution and non-compliance fines.
Recently IBM and Vision Professional Services conducted a failover test on more than 100 of their customer’s servers. This testing was designed to ensure the customer’s DR strategy and infrastructure would be fully operational in the event of an outage. During the test, two of the servers did not failover properly. John Merritt in Vision’s CustomerCare answered the call for technical help, identified the root cause of the issue and quickly got IBM back on track with their testing. The large-scale test concluded soon after and IBM, and the client, can rest assured that they’re fully protected from downtime and data loss.
Vision CustomerCare regularly works with organizations like IBM to make short and easy work of small and large scale DR testing. If there’s ever a problem, our follow-the-sun CustomerCare will be there to help.
Got any upcoming projects such as moving to the cloud, migrating or sharing your data, or overall disaster recovery? Start with a product evaluation!
The end of the year is a great time to slow down. We’ve got another productive year in the bag, the holidays are nearing and most of us naturally feel a need to loosen the schedule and squeeze in as much time off as possible. After all, in just a few weeks another new year is going to take off at breakneck speed – and take us along for the ride.
Of course, business never really takes a holiday, so if you’re responsible for keeping an HA-protected datacenter up and running, you may feel like you’re never off the hook. Fortunately, the right technology strategy can help you feel confident enough to take the time off that you need, and actually keep you out of the office during vacation and holiday breaks.
Here are seven simple tips to help your IT team and company through the holidays:
- Determine what end-user expectations will be for service levels over the holidays. Roughly how many people will be hovering over their inboxes or catching up on paperwork?
- While it’s a struggle for many companies to have even one IT person in the datacenter on certain dates (depending on the maturity of the datacenter and level of regulatory oversight you fall under), try to have someone available with the skills needed to perform a successful failover.
- Use modern communications effectively to make sure that someone with the knowledge and authority to recover your systems receives critical system messages no matter where they are.
- Publish several copies of your DR Runbook and keep them handy. Write your DR Runbook to a thumb drive and give one to anyone in IT that might need a copy. Alternatively, here’s your chance to try Google Drive, or another free cloud-based storage offering.
- Test your high availability environment at least once a year. In the case of HA, it’s critical since URLs can change after a failover. You’ll need to have experience redirecting them when the time comes.
- Create a list that clarifies succession of responsibilities for members of the IT department. Who do I call first? If they’re not available, who’s up next?
- Create a list of high-stakes users and make sure you know how to reach them if something really serious happens. They’ll be happy you did.
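The succession-of-responsibilities tip above can be sketched as a simple lookup over an ordered escalation list. The `next_responder` function and the sample names are illustrative assumptions, not part of any Vision product:

```python
def next_responder(succession, unavailable):
    """Walk the on-call succession list in priority order and return
    the first person who is reachable, or None if nobody is."""
    for person in succession:
        if person not in unavailable:
            return person
    return None

# Usage: the primary admin is on holiday, so the call escalates.
chain = ["primary_admin", "backup_admin", "it_manager"]
next_responder(chain, unavailable={"primary_admin"})  # -> "backup_admin"
```

Even this trivial form answers the two questions in the tip ("Who do I call first? If they're not available, who's up next?") and makes the gap obvious when the whole chain is out of the office at once.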
If you follow these seven tips you’ll go into the holiday season with confidence, and come out with nothing worse than a few extra pounds around your waist and an unshakable addiction to fruitcake.
Oh, and by the way, if you need help from us over the holidays, we’ll be here. Seriously. Click here for a list of our CustomerCare contacts worldwide.
NCMEC helps find missing children and helps victims of child abduction and sexual exploitation, their families, and the professionals who serve them. In this organization, downtime costs are measured in risks and losses far greater than revenue or productivity.
In order to carry out their very important mission, NCMEC depends on their BlackBerry® Enterprise Server being available every single minute of every single day. When a child goes missing, constant communication and reliable data sharing with the public, law enforcement and the Department of Justice is vital.
“We needed a fail-proof disaster recovery solution,” said Steven Gelfound, Director of IT at NCMEC.
In an outage, NCMEC simply cannot afford the time it would take to rebuild data from scratch. NCMEC chose Double-Take Availability to replicate, monitor and failover their Exchange and Mimosa NearPoint environments. They worked with Vision experts to implement a failsafe solution in which they could have total confidence. Today, if a server fails for any reason, users are routed instantly to a backup server at another facility so they never lose a moment of access to their communication tools.
“NCMEC believes in doing all we can to protect children from child sexual exploitation and recover missing children. That philosophy works itself into all aspects of our business, including our business continuity and disaster recovery objectives,” said Gelfound. “Automating our recovery process with Double-Take Availability helps us to significantly decrease downtime and could have a direct impact on an active missing child case.”
Protect your data before it goes missing with a Double-Take Availability free trial!
Raymond James is one of the largest investment dealers in the world. If staff and customers can’t access critical data and applications, the business loses big money and customers lose confidence. Because Raymond James is regulated by industry policies and government legislation, they have to be able to recover critical data and applications within 48 hours of an outage or face stiff fines.
With an eye on their critical document management system and email servers, they wanted a reliable protection and recovery solution that would work across all of the company’s physical and virtualized environments that would be easy for their IT team to manage, and would allow them to recover quickly after an outage. They chose Double-Take Availability because it met all of their goals – and protects their VMware ESX guest machines in a complex, multi-server environment.
The IT team is happy to report that even though they were replicating 1.5 terabytes of data to a server more than 2,000 miles away, Double-Take Availability was installed and their data was protected quickly.
“This is the best product I’ve found for virtualized environments, hands down. There are a number of great licensing options, making it a low-cost solution. I haven’t seen any other solution that is as easy to use as Double-Take Availability. It allows me to do everything I’m doing today.” Eric Wright, Systems Architect, Raymond James Ltd.
Continue reading by downloading the full Raymond James case study.
Get the same strategic advantages as Mr. Wright with a free trial of Double-Take Availability!
Ask your questions to our experts and peers about Double-Take Availability in our open forums.
Leave a comment below to let us know what you’d like to see next on our blog.