Tuesday, 18 July 2017

Software-Defined Data Center (SDDC) – Solutions for Modern Apps in the Cloud

Members of the IT industry are all talking about the new concept of a software-defined data center (SDDC). But many wonder: how is this different from a traditional data center? Is it indeed revolutionary?

Some think the SDDC is an extension of existing physical assets, but at its core, it frees the application layer from the physical infrastructure layer. This allows for a wide range of uses, including deploying and managing business applications in the cloud, along with the compute, storage, and networking they require.

Here we’re examining the software-defined data center industry and identifying what makes SDDC solutions truly revolutionary.

Application Layer Focus with Customization and Automation for Business Applications

So, what’s the real difference between this kind of data center and the hardware at the office? The software-defined data center frees the application from the hardware layer. Computing as we know it is about to make a quantum leap to data centers that can live in multiple physical locations.

The software-defined data center is a unified data center platform that helps transform the way companies deliver IT with automation, efficiency, and flexibility. It’s built for the cloud and geared toward modern applications. This platform is ideal for businesses interested in modernizing without the expense of a physical overhaul.

True to Its Name – Defined in Software

The software-defined data center is exactly what its name states – defined in software. Deployment, application management, and virtualized computing, storage, and networking all exist as software.

Let someone else own the hardware, guards, fuel, batteries, and generators, and employ the hundreds of people needed to service them. Software-defined data centers can be pursued now and deliver a return on investment one application at a time – not one physical data center at a time.
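To make “defined in software” concrete, here’s a minimal sketch of creating a virtual network entirely through software, assuming an AWS account with the boto3 Python SDK configured (the network address ranges are hypothetical):

    # A minimal sketch: a network that exists only as software.
    # Assumes AWS credentials are configured; CIDR blocks are hypothetical.
    import boto3

    ec2 = boto3.client("ec2")

    # Define a virtual network (VPC) with an API call -- no cables, no racks.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # Carve out a subnet inside it, again purely in software.
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    print("Software-defined network ready:", vpc_id, subnet["Subnet"]["SubnetId"])

The same pattern extends to storage and compute, which is what makes it possible to pursue an SDDC one application at a time.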

Companies Migrate Specific Applications

Organizations aren’t migrating entire data centers to the cloud. They migrate specific applications that collectively perform a business function. Using the application as a target allows IT departments to get instant ROI when they deploy and migrate their first cloud application.

A more agile approach uses cloud containers to avoid entangling existing physical assets and infrastructure. SDDC architecture allows containers to hold business systems of record and systems of engagement within a context of integration and security. Companies can then apply the virtualization principles of pooling, abstraction, and automation so their applications are cloud-ready.

No More Data Center Limitations

Limitations to SDDCs previously included physical constraints and a lack of application-layer focus. Companies were leery of sending applications to the cloud without proper context for governance, security, and integration.

While cloud computing saves money, runs more efficiently, and offers more raw computing power than traditional infrastructure, containers take it a step further by allowing IT teams to migrate, deploy, and control apps within the cloud.

The SDDC of the Future

The software-defined data center is a proper solution for cloud customization and integration. Companies can use the growing number of virtualized services, networks, and APIs to grow their data centers beyond their physical walls one cloud container at a time.

The SDDCs of the future will offer a cross-cloud connection with cloud-deployed data centers, existing data centers, and all virtualized elements in between. To make SDDC solutions a reality for companies, certain features must be provided and certain objectives met.

A true software-defined data center will be:
·         Adaptive
·         Automated
·         Holistic
·         Resilient
·         Standardized

SDDC solutions will contain features including:
·         File system virtualization
·         Image automation
·         Network virtualization
·         Topology automation
·         Topology centric services


These are key to customizing, automating, and controlling application-focused features. Then companies can securely transition to the cloud and use SDDC architecture to innovate with greater utilization, resiliency, and cost savings on a common platform. 

Thursday, 6 July 2017

7 IT Consulting Rules to Master

IT consulting gurus understand the business isn’t all about golfing and happy hours. Expert IT consulting services involve thankless work, unexpected detours, and changing goals.

However, the job of an IT consultant can be fulfilling as you improve your client’s bottom line. While there’s no exact guidebook for IT consulting, here we’re gathering a few pro tips and ideas to enhance your IT consulting services and help you gain the respect of clients.

1.    The Client Defines Success – Not You

Each project brings its own IT consulting hurdles and challenges to deliver the right solution on time and on budget. But keep in mind, with each contract you sign, the client is the hero and they define success.

As a consultant, your role is that of a mentor. The goal is to guide your clients and ensure they succeed on their terms. You may be held to a definition of success set by multiple parties involved in the project – the IT manager, the production manager, inventory specialists, the CFO, and more. Each may have a different goal in mind.

2.    Actively Listen to All Vantage Points

Each player in a project has an issue to be solved. Actively listen to understand the role and environment of each person.

At the same time, be discerning. Some feedback may be speculation or fantasy; be aware of it, but don’t dwell on it. Take note of unusual or unexpected issues that come up – these can double as both a problem and an opportunity.

3.    Respect Your Client’s Privacy and Reputation

Never mention your client’s name to others without permission. They may not want other groups, including their competitors, knowing they work with you. Talking about other clients who use your IT consulting services can sometimes come back to haunt you.

4.    Keep Up the Momentum

Certain stages of a project may be technical and require more time to absorb. Give yourself and the client time, but be mindful of momentum.

Don’t wait too long to connect. Take smaller steps if needed to keep moving in the right direction.

5.    Maintain Great Communication

Connections stay strong through effective and responsive communication. Don’t waste anyone’s time with unnecessary meetings and don’t waste a client’s time with unnecessary requests. Above all, don’t waste your own time doing nonproductive activities, even when requested by the client.

6.    Always Be Truthful

Speak the truth, even when the client doesn’t want to hear it. The truth should be a window of opportunity, not a hammer.

While no one wants to receive bad news, especially if it results from their actions, the news still needs to be delivered. Remember, it’s not your job to shield your client from unpleasant facts.

7.    Turn the Other Cheek

Regardless of whether the project ends as planned or unexpectedly, be grateful for the trust and work received. Make sure the client has everything they need to continue without you, and always keep a professional and helpful demeanor.

Looking for professional IT consulting services? We can help. Get in touch with Advanced Systems Group to learn more.



Sunday, 25 June 2017

New Secure Data Storage Solutions Are Circling Above You

You’ve probably heard about the historic wave of data breaches in recent years exposing highly sensitive data for millions of people. Because of this, secure data storage and transfer have become bigger priorities for organizations.

From the IT department to the boardroom, all eyes are on secure data management teams to determine the best ways to protect critical data in physical data centers and cloud data storage.

While it’s common to use shared hosting facilities, organizations face two major risks:
1.      The risk of exposing critical data
2.      Challenges associated with legal hazards

Companies of any size are exposed to a leaky internet and leased lines. As organizations move from legacy systems to agile software solutions, a paradigm shift is occurring in how we store, access, and archive sensitive data.

An Alternative Solution Is Needed

Current methods of data storage and sharing aren’t meeting customer and market demands for security. Secure data storage and management are needed to mitigate exposure and protect sensitive data from hijacking, theft, and more.

But first, we need to understand two reasons why a secure data storage alternative is needed:
1.      Cloud Threats. Cloud environments run across hybrid (public and private) networks whose IT controls can’t stand up to real-time cyber security threats. Sensitive data may be monitored by a government agency or exposed through unauthorized access to company computers, passwords, and storage on public and private networks.
2.      Difficult Jurisdictions. Governments are reviewing privacy rules to restrict cross-jurisdictional access and transfer of personal and corporate data belonging to their citizens. This means organizations must maintain separate data centers in each jurisdiction, which can be financially difficult for mid-size companies.

Changing Our Perceptions of Data Storage

For better data security, a space-based cloud storage network could provide private and government organizations with an independent cloud infrastructure platform. This would isolate and protect sensitive data from the outside world.

An organization’s data can be stored and distributed to a private data vault designed for secure cloud storage networking without exposure to leased lines and/or the internet. The architecture is an innovative way to reliably store data and protect it against hijacking, cyber attack, sabotage, and even natural disasters.

In an era where technology changes every day, it won’t be long before organizations, governments, militaries, and others turn to satellites for secure data management of highly sensitive material: video, drone audio, and feeds from authorized personnel in remote locations.

The Storage Model of Tomorrow

Many companies have relied upon internet and cloud data storage because these were the only options available. As technology continues to evolve, organizations are watching for new solutions to protect and secure sensitive data better than traditional infrastructures. Satellite storage may now solve their problems. 

At Advanced Systems Group, we help organizations with secure data management to ensure your data is protected. Get in touch to learn more about how we can help.


Saturday, 10 June 2017

Is Your Cloud Consultant Losing It?

In recent years, cloud implementation consultants have been acting like they’re losing their minds. If you think their actions or stupidity won’t affect you and your project, you’re wrong.

Unless you do system configuration and implementation work internally, you depend on your cloud consulting representative to complete projects. If they make mistakes, your entire project could be at risk. Or, if your project is completed correctly but they can’t make a profit, they may eventually bail on you. This means you must do your homework when working with technology consulting and cloud consulting companies.

Competitive markets only work when they’re healthy, and participants typically don’t do well in unhealthy ones. You’ll see it first as desperation among vendors; later, it shows up as failed projects. Even though the market for cloud consulting services is booming, it isn’t healthy.

The cloud consulting market is facing many challenges. Here’s why:
  • New vendors around the world are trying to compete and blowing the bottom out of the market. Using low-end vendors typically means more monitoring and management for clients. You may never consider these vendors, but they’re still taking margins out of the business for everyone.
  • Since cloud vendors are innovating constantly, technology platforms are more complicated. It’s common to need to learn a new language or toolkit every year. This means fewer consultants are fully competent and up-to-speed on the latest.
  • Clients aren’t doing their homework or dedicating enough resources to the project. In fact, they don’t really know what they need. Even without requirements, they’re getting fixed-price bids before initial discovery takes place. These situations often lead to lawsuits.
  • Clients don’t treat their cloud computing software as a long-term corporate asset and aren’t taking their project seriously. Keep in mind, even if you don’t control the hardware and operating environment, it’s still IT.
  • Many believe the myth that cloud technology consulting projects should cost around “1x the annual license revenue.” If you compare the annual license revenues of most cloud vendors to those of on-premises software, you’d expect the multiple to be much larger. This price expectation is unrealistic and dangerous.

So, What Does This Mean?


You may think, “Great! More competitors mean lower prices AND they’re battling to improve systems and products.” The problem is that when everyone in the cloud market starts making bad decisions, others follow suit and no one wins.

Cloud consulting has taken a turn in the last few years. The following are a few observations:
  • Clients are focused on price, not the value a consultant brings to their business. Combined with very little trust and a heavy emphasis on micromanagement, this creates a bad situation for all parties involved.
  • Win rates in bidding wars have fallen, which means vendors are lobbing out proposals without giving them much thought. While project costs have risen, opportunities for project innovation have declined. This results in one-size-fits-all solutions and lower client satisfaction.
  • Sales reps are quick to say, “Yes,” before knowing what the client truly needs and if the technology available can meet their demands. The sales rep simply makes the sale but the issues are noticed later as the project begins. These late changes can result in a project costing upwards of three times what was originally quoted or huge change orders (including cancellations or lawsuits).
  • Clients are quick to use resources from overseas with little thought. Not only is distance a factor, but so are time zones, business culture, and language barriers. This can lead to big problems later in a project.

Our Best Advice? Take Your Time


These issues can be alleviated by taking your time as you research vendors. Be thorough in your final selection for technology consulting. Spend a little money on initial discovery and analysis with a consultant before your project begins, and screen consultants thoroughly.

If you don’t trust your consultant to make good decisions for your company and deliver high-quality results, you’re already headed in the wrong direction. You should cut ties and exit the relationship as early as possible.

Saturday, 20 May 2017

8 Steps to Creating an Efficient IT Disaster Recovery Plan

Natural disasters aren’t the only things that can cause widespread outages and damage. In Kenya, millions of homes and businesses were left without electricity thanks to a monkey.

Your business isn’t safe based on your geographic location alone. Various threats can destroy data and ruin an organization. That’s why it’s important for all companies to have a solid IT disaster recovery plan.

In the event of an emergency or disaster, make sure your systems, personnel, and data are well-protected by following these disaster recovery solution guidelines.

1.    Keep an Inventory of all Hardware and Software

Include in your DR plan a complete inventory of all company hardware and software based on priority. Next to each name, list the vendor contact information and phone number for technical support.
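One simple way to keep that inventory useful is as structured data you can version, update, and print for the DR binder. Here’s a minimal sketch in Python – every asset, vendor, and phone number below is a placeholder:

    # A minimal, hypothetical DR inventory -- all entries are placeholders.
    inventory = [
        {"asset": "Primary database server", "priority": 1,
         "vendor": "ExampleVendor", "support_phone": "+1-555-0100"},
        {"asset": "Email gateway", "priority": 2,
         "vendor": "ExampleMail", "support_phone": "+1-555-0101"},
    ]

    # Print the list in priority order for the printed copy of the plan.
    for item in sorted(inventory, key=lambda i: i["priority"]):
        print(f"{item['priority']}. {item['asset']} - {item['vendor']} ({item['support_phone']})")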

2.    Decide What You’re Comfortable with for Downtime and Loss of Data

After listing your hardware and software, the framework for your DR plan starts here. If you’re an electrician, you could probably keep your business going for a while without technology or servers. But if you’re Amazon, you’re limited to mere seconds of downtime. Knowing where your business falls on the spectrum will help you decide which disaster recovery solution you need.

Take your list of company software and applications and rank them into three tiers.
·         Tier 1 = systems you can’t do business without and need immediately
·         Tier 2 = applications or systems you need within 8-12 hours, even up to 24 hours
·         Tier 3 = applications you could survive without for a couple days

Defining your applications and systems helps you prioritize and improve the speed and success of recovery. Be sure to test your plan at least twice a year and update your tiers based on the results.
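If it helps, the tier assignments and their downtime targets can live in one testable place. A quick sketch, with hypothetical applications and illustrative targets:

    # Hypothetical tier assignments; hours are illustrative targets, not rules.
    TIER_MAX_DOWNTIME_HOURS = {1: 0, 2: 24, 3: 72}

    applications = {
        "order-processing": 1,   # can't do business without it
        "reporting": 2,          # needed within 8-24 hours
        "internal-wiki": 3,      # can wait a couple of days
    }

    for app, tier in sorted(applications.items(), key=lambda kv: kv[1]):
        print(f"{app}: tier {tier}, restore within {TIER_MAX_DOWNTIME_HOURS[tier]}h")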

3.    Identify Who Is Responsible and for What

Your DR plan should clearly define roles and responsibilities, including designating the person responsible for declaring a disaster. Defined roles make disaster recovery tasks easier to manage when all parties are familiar with their responsibilities. This is even more critical when working with third-party vendors. When everyone is on the same page, the DR process works as efficiently as possible.

Protocols for an effective DR plan should include who to contact, how to contact them, and in what order they should be contacted to get systems up and running. Create a contact list with all DR personnel to include the details of their position, role, responsibilities, and contact information. Also, consider putting a succession plan in place with trained back-ups in case someone is on vacation or leaves the company.
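One lightweight way to capture the call order and the succession plan is a simple escalation list – all names and numbers below are placeholders:

    # Hypothetical escalation list: call in order until someone answers.
    escalation = [
        {"role": "DR coordinator", "name": "A. Example",
         "phone": "+1-555-0110", "backup": "B. Example"},
        {"role": "Network lead", "name": "C. Example",
         "phone": "+1-555-0111", "backup": "D. Example"},
    ]

    for step, person in enumerate(escalation, start=1):
        print(f"{step}. Call {person['name']} ({person['role']}) "
              f"at {person['phone']}; backup: {person['backup']}")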

4.    Establish a Communication Strategy

Good communication plans are often overlooked, but incredibly valuable. If disaster strikes, how will you communicate with employees? Do they know how to access the systems they need to do their job? During a disaster, phone or email may be down so alternative methods of communication should be identified.

A good plan also includes initial communication when a disaster takes place and ongoing updates to keep everyone informed. Clear communication is essential in managing IT disaster recovery with timely updates sent to employees, suppliers, and vendors. A written communication process can help lead to action and align organizations, employees, and partners.

To keep your customers up-to-date in the event of an emergency, publish a statement on your website and social media platforms. Offer prompt status updates showing you’re aware of the situation and working to take care of it.

5.    Tell Employees Where to Go in an Emergency

A good DR plan doesn’t just back up your technology. It protects your employees and keeps your team operational. In the event your primary office isn’t available, select an alternate site for employees to work from. Make sure employees know where to go and how to access systems from the new site, and offer a clear map of the alternate site, including where to sit.

Keep in mind any compliance regulations and contract dedicated workspace where employees and company data remain private. Contract a large enough space with seats for all employees needed to meet recovery requirements.

6.    Check if Your Service-Level Agreements Include Disasters or Emergencies

If you store systems in a data center or outsource technology, have a binding agreement with the provider that defines the level of service you’ll receive if a disaster takes place. The agreement may include the timeframe to get systems back up and running.

7.    Outline How to Manage Sensitive Data

Outline IT disaster recovery technical procedures to make sure sensitive data is protected. Address how the information is maintained and accessed when your DR plan is activated.

8.    Test Your DR Plan Often

Backups and systems may fail you, the internet connection may be too slow, and a key employee may have changed her cell number. The only way to find out if a plan works is to test it.

Define how your DR environment will be tested including the methods and frequency. Infrequent testing often leads to DR environments that don’t work as planned. Create a testing schedule that measures your recovery time objective (RTO) and recovery point objective (RPO) goals to validate they can be met. The more comprehensive the testing, the more successful your company will be at surviving a disaster. If a DR test fails, identify the issues and fix problems early so you’re ready for any crisis.
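As a rough illustration of what a test should measure, RTO is how long recovery actually took and RPO is how much data history was lost. A sketch with invented timestamps:

    from datetime import datetime

    # Hypothetical timestamps from a single DR test run.
    outage_declared  = datetime(2017, 5, 20, 9, 0)
    service_restored = datetime(2017, 5, 20, 12, 30)
    last_good_backup = datetime(2017, 5, 20, 8, 0)

    rto_achieved = service_restored - outage_declared  # time to recover
    rpo_achieved = outage_declared - last_good_backup  # data window lost

    print(f"RTO achieved: {rto_achieved}, RPO achieved: {rpo_achieved}")
    # Compare these against your targets, e.g. RTO <= 4h for tier 1 systems.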

Don’t forget to test key DR employees. They should be well-versed in the plan and their role in completing tasks. Simulated disasters and drills give your staff the experience and confidence to execute the plan if an actual disaster occurs.


Avoid being caught off-guard when a disaster strikes. Map out disaster recovery solutions and put them to the test so your business can handle any challenge. 

Thursday, 11 May 2017

Cloud Computing: More Than Just Infrastructure

Cloud computing infrastructure, including private cloud infrastructure, has set a whole new standard for infrastructure expectations. Until recently, provisioning computing infrastructure took weeks, months, or even years, depending on project priority, budgeting, and the availability of staff.

Today, cloud computing infrastructure is measured in minutes. Thanks to AWS, Azure, and Google Cloud, you can go from initial cloud setup to running virtual servers in under 10 minutes.
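As a rough sketch of what that looks like in practice – assuming AWS credentials are already configured, and using a placeholder machine image ID – a virtual server is a single API call away:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch one small virtual server; the image ID below is a placeholder.
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",  # hypothetical machine image
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Instance launching:", response["Instances"][0]["InstanceId"])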

Both developers and application groups are thrilled about cloud computing infrastructure innovations offering faster application development and deployment. Quality also improves because infrastructure rationing is no longer necessary. The new speed and accuracy let development teams stand up infrastructure quickly to try new initiatives and tear it down just as quickly if they don’t work.

Fast infrastructure capabilities are great, but cloud computing offers even more. Infrastructure is just the foundation of cloud computing with more value just waiting to be discovered. Other cloud computing services can speed up software development which pushes the lifecycle process into overdrive.

Evolution in Cloud Computing Thought

Drew Firment of Capital One shares the evolution in thought surrounding cloud computing in his blog. Cloud computing is leading to an IT revolution. It requires companies to rethink how they deliver value to their customers and reminds them that customers don’t care about private cloud infrastructure – they care about the experience it delivers.

Firment points out that many IT organizations and employees fail to grasp this indifference: to the customer, everything about the company’s computing is invisible except the value they actually experience.

He also makes two observations about the DevOps toolchains many IT organizations are creating to make their application pipelines as fast as their infrastructure:
·         Unique combinations of CI/CD tools used throughout an organization are counterproductive and unnecessary. Each enterprise DevOps continuous delivery pipeline may work well locally, but the proliferation is detrimental to the well-being of the whole system.
·         A similar story plays out between the DevOps pipeline and the commoditization of infrastructure into a compute grid by AWS. Thanks to CodeBuild, a newer addition to AWS Developer Tools, you have fewer reasons to roll your own pipeline (see the sketch after this list).
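As a loose illustration of that off-loading – the project name below is hypothetical and would be defined in CodeBuild separately – kicking off a managed build is a single call rather than a home-grown pipeline:

    import boto3

    codebuild = boto3.client("codebuild")

    # Trigger a build in the managed service; "my-app-build" is a
    # placeholder CodeBuild project defined elsewhere.
    build = codebuild.start_build(projectName="my-app-build")
    print("Build started:", build["build"]["id"])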

This means IT organizations should examine how they go from the initial concept of an application to delivering its functionality to a user in the fastest and cheapest way possible. Any factor that doesn’t differentiate the parent company in the marketplace is ripe for replacement by a low-cost provider, typically a scale cloud provider.

This allows IT organizations to focus their budget and efforts on features that offer their customers unique functionality. Adopting this idea, though, means IT organizations must understand cloud computing as more than just fast, cheap infrastructure: it’s a full set of computing services for building applications quickly. Even better, every service an application group leverages becomes a part of the application that the group no longer has to run itself. This off-loading of responsibility allows groups to place their focus and efforts on customer value.

Because IT organizations spend so much time obtaining and managing private cloud infrastructure, it’s challenging to recognize how cloud computing changes the assumptions behind delivering applications. Firment reminds IT organizations to shift their mindset from managing computing resources to delivering customer value.


In addition, IT organizations need to understand this requires evaluating their entire value chain and focusing on what only they can deliver. That means passing off responsibility for as much of the computing stack as possible and devoting staff resources to company-specific functionality.

Friday, 20 January 2017

7 Cloud Computing Solutions and Data Center Trends You Should Know

Late in 2016, Cisco released its Global Cloud Index forecast, with interesting cloud computing solutions and data center trends for 2015 to 2020. The following predictions may interest you:
  •         Public cloud growth is expected to increase by 35% per year, with private cloud computing increasing at 15% per year
  •         Cloud data center traffic is expected to grow by 262% and exceed 14 zettabytes
  •         By 2019, 55% of the consumer internet population (2 billion people) will use personal cloud storage
  •         By 2020, 92% of all data center traffic will be cloud workloads

If these predictions are accurate, data center demands and public and private cloud computing solutions will experience increasing growth and changes in infrastructure by 2020.
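To put those growth rates in perspective, compounding them over the five-year forecast window is simple arithmetic – roughly 4.5x total growth for public cloud and about 2x for private cloud:

    # Compound the forecast annual growth rates over 2015-2020 (5 years).
    years = 5
    public_cloud_growth = 1.35 ** years   # 35% per year
    private_cloud_growth = 1.15 ** years  # 15% per year

    print(f"Public cloud: ~{public_cloud_growth:.1f}x over {years} years")    # ~4.5x
    print(f"Private cloud: ~{private_cloud_growth:.1f}x over {years} years")  # ~2.0x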

Here’s a recap of Cisco’s forecast with the top seven data center and cloud networking trends:

1.      High growth in global data center relevance and traffic. This includes hyper-scale data centers and a 3x increase in data center IP traffic. Hyper-scale data center operators will nearly double the number of data centers by 2020 to meet high traffic demands. This will account for up to 47% of installed data center servers, 83% of the public cloud server installed base, and 86% of public cloud workloads.

2.      Continued global data center virtualization. Migrating workloads from traditional data centers to cloud data centers will be easier thanks to the increase in virtualization within the cloud. This offers more efficient workloads – a necessary requirement to meet the growth demand of cloud services.

3.      IaaS, PaaS, and SaaS workloads will gain traction from 2015 to 2020. Cisco forecasts IaaS (Infrastructure as a Service) will grow at nearly 17% each year, PaaS (Platform as a Service) at 24% each year, and SaaS (Software as a Service) at 30% each year through 2020.

4.      By 2020, public cloud workloads will surpass traditional and private data centers across all applications except for ERP, collaboration, database/analytics/IoT, and other business compute applications.

5.      Data center storage capacity will increase 5x between 2015 and 2020, reaching 1.8 ZB by 2020, with 88% of it cloud based.

6.      IoE (Internet of Everything) will impact data centers. IoE and cloud services could generate nearly 600 ZB of data by 2020, up from 145 ZB in 2015. While little of that data will be stored, much of it will be used, and it may best be serviced with edge or fog computing.

7.      Delivering improved security while improving customer experience is necessary. Companies will continue to be concerned about storing data in the cloud. When moving to the cloud from traditional data centers, it’s necessary to focus on latency and network performance to deliver optimal cloud benefits.

Cisco’s top seven predictions pinpoint the need for continued improvement in network capabilities to support advanced cloud applications and to meet consumer and organizational expectations.

For complete details on all data center trends and the importance of cloud computing solutions, download the full PDF:



Need help with public or private cloud computing solutions for your business? We can help. Get in touch to learn more.