Thursday, August 30, 2018

Key Benefits Of A Fully Managed Wi-Fi Solution

The rapid rise of new disruptive technology trends - cloud, social media, and mobility - has added a new dimension to business operations. Connectivity is now the most critical factor for running a competitive business. How will businesses support the flood of devices these trends bring? Turning to cloud-based Wi-Fi providers is one way. With an outsourced wireless solution, you can overcome many of the challenges of providing access to an ever-increasing number of mobile devices while keeping pace with wireless technology advances.

Wi-Fi providers that offer a hosted, cloud-based WLAN serve a wide range of businesses. Migrating to an outsourced model for wireless network management makes sense for any company that needs to provide wireless. Let's understand a few benefits of a fully managed Wi-Fi solution:

Geographically Dispersed Locations

The traditional wireless network uses a physical hardware controller to direct access points. However, if your business operations are distributed across locations, the traditional controller-based configuration is less desirable. In a cloud-based model, wireless LAN services enable plug-and-play capability for devices across all locations.

Ease of Business

The first and most noticeable benefit of outsourcing WLAN management is hassle-free connectivity. The IT team no longer needs to deal with the stress of network downtime and constant demands for accessibility from users spread across the company.

The entire WLAN ecosystem is managed by the technology partner, who takes care of all the Wi-Fi requirements of the company - from analyzing enterprise requirements, designing and installing a customized WLAN system to day-to-day management and operations of the system.

Risk management

Mitigating risk is important in all areas of business, and managing a Wi-Fi network is no different. All businesses must address consumer privacy, and public Internet access raises its own unique considerations. Consumer privacy expectations are a prominent topic in today's news. It is considered a best practice when implementing public Wi-Fi to require users to agree to a Terms of Service (ToS) that incorporates an Acceptable Use Policy (AUP). Skipping this step exposes a business to serious risk; a managed Wi-Fi provider typically handles these safeguards as part of the service.

Customer engagement

Managed Wi-Fi offers business owners another way to reach their customers. Videos, promotions, surveys, and other types of content can be displayed on the page customers see when they connect to the network, serving as a digital engagement tool for a business to deliver messages. One of the biggest benefits a business receives by offering public Wi-Fi is the potential to develop deeper relationships with customers and the capacity to improve its understanding of customer interests.

Complete visibility

With a fully managed wireless network, companies gain an unprecedented level of visibility and control over the entire network. A cloud-based, centralized WLAN monitoring dashboard provides deep visibility into the entire WLAN infrastructure along with application-level and user-level control. Managers can control Wi-Fi usage by app category and make quick decisions with weekly or monthly reports.
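
As an illustration of the kind of roll-up such a dashboard performs, here is a minimal Python sketch that totals usage by application category. The records and category names are hypothetical, not any vendor's actual export format.

```python
from collections import defaultdict

# Hypothetical usage records as a managed-WLAN dashboard might export them:
# (user, application category, megabytes transferred)
records = [
    ("alice", "video-streaming", 420),
    ("bob", "collaboration", 150),
    ("alice", "collaboration", 60),
    ("carol", "video-streaming", 310),
]

def usage_by_category(rows):
    """Total megabytes transferred per application category."""
    totals = defaultdict(int)
    for _user, category, mb in rows:
        totals[category] += mb
    return dict(totals)

print(usage_by_category(records))
```

A real managed service would feed this kind of aggregation from controller logs automatically; the point is that per-category totals are what let a manager throttle or block by app category rather than per user.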

Dynamic scalability

Managed WLAN services have the added advantage of being highly scalable. Organizations can rely on rapid provisioning and deployment of additional WLAN nodes to match sudden workload spikes.

IT Staffing Constraints

Many companies, especially small- and medium-sized businesses, have insufficient in-house IT resources to manage robust wireless networks. Yet, all businesses must embrace mobility to remain competitive. Service providers can provide the solution you need with access to advanced technology, tools and expertise. If you don't have sufficient in-house IT resources, you're still able to leverage critical mobility capabilities. If you do have in-house IT staff, you can refocus their efforts on other projects.

Growing Operations

A major benefit of working with service providers is the ability to scale your WLAN solution as your business grows. When network demands increase, your managed WLAN solution can scale accordingly. The ability to scale is especially critical for organizations that can't always predict demand, such as when internal staff and outside visitors require network resources for their own devices. Examples include educational institutions, health care facilities, hospitality companies, retail operations and any organization trying to manage unpredictable demand.

The last few years have seen the corporate work culture changing drastically. Employees no longer sit for hours in front of their desktops; mobile devices and laptops allow people to move freely and work from anywhere in the building. This dynamic work culture is perfect for enhancing teamwork and nurturing innovation, and an enhanced WLAN ecosystem is an essential component of this new work environment.

By Saumya Sinha


Monday, August 27, 2018

More On....Meeting Your Bandwidth Requirements For Supply Chain Management Applications

As I pointed out in a previous article, Supply Chain Management (SCM) is a complex animal. The key to a successful SCM implementation is a clear understanding of the business objectives and business requirements of the company the SCM primarily supports. This often includes a number of legacy systems that need to be integrated into the solution. From this will come the technical objectives to be met and the technical requirements that frame the solution. Only then will the communication requirements for bandwidth capacity, reliability, resiliency, latency, security, and expandability be meaningful.

Here are just two such technical aspects.....

Frame Relay

Frame relay initially had several advantages over the alternative solutions for SCM and other multi-site, multi-company communications networks.

The first advantage was circuit cost. For a multi-site network, the traditional approach was a large number of point-to-point circuits. Each circuit required a router port, a CSU, and often a circuit monitoring module. With mileage-based pricing, each circuit represented a significant recurring cost on top of the initial hardware costs. Router sizing was often a factor of ports supported rather than performance capability.

Frame relay replaced those point-to-point circuit costs with a single access circuit, typically at less than one-tenth of the cost. With port speeds from DS0 to DS3, multiple sites could be connected with a single port at each site. A partial or full mesh, even with full redundancy, could be accomplished with very few router ports and CSUs at each site. This represented significant capital savings.

Using fractional T1 and T3 on the access circuits, frame relay made expanding capacity between sites relatively painless. Port changes within the frame relay provider's network were often just a configuration change. Expanding the actual circuits was typically a configuration change on the CSU and DACS.

Adding new sites was often accomplished with physical changes at the new site only. The new PVC across the frame relay network and at the existing site(s) was a configuration change. Depending on the routers used and the routing protocol implemented, this might be accomplished without a maintenance window.

The PVC approach allowed for additional security. A given location could be directed to a specific port within the DMZ, limiting the exposure of one's own network to other vendors within the SCM network. Firewalls at each end allowed each company to control its own security. The frame relay network was vulnerable to external monitoring at very few points, and mapping PVC traffic to a specific customer required detailed network design information.

Frame relay offered the ability to have a disaster recovery site support multiple locations. PVCs between the disaster recovery location and the other locations could be predefined in the configuration, allowing dynamic implementation of the disaster recovery network.

As a circuit protocol, frame relay functions independently of other protocols. This segmentation allowed IPX, IP, SNA, and other system communications protocols to be implemented over the same paths. If desired, each of these could have its own PVC and bandwidth, or they could all operate over a common path. Finally, bandwidth and performance could be established specifically for site pairs on a PVC basis.

For a vendor that participated in multiple SCM networks, frame relay represented real cost savings. Instead of a new circuit for each network, a PVC could be established. Instead of six-week circuit installation delays, service could be established in hours.

----

So why the past tense? The advantages of frame relay are now achieved via the Internet. The timeframes for implementation have been reduced from hours to minutes. Encryption has advanced beyond the security offered by isolated paths. Advances in application-based routing can achieve availability assurances. Legacy protocols have been largely replaced by IP.

There are still times when frame relay is the best choice based on business requirements or technical constraints. But a robust high-bandwidth network (e.g. OC3 or OC12 bandwidth....perhaps with GigE connectivity) running IP protocols will enable a seamless flow of information without sacrificing security.

Emerging Technologies

The most notable is Radio Frequency Identification, or RFID. RFID tags are essentially barcodes on steroids. Whereas barcodes only identify the product, RFID tags can tell what the product is, where it has been, when it expires, whatever information someone wishes to program it with. RFID technology is going to generate mountains of data about the location of pallets, cases, cartons, totes and individual products in the supply chain. It's going to produce oceans of information about when and where merchandise is manufactured, picked, packed and shipped. It's going to create rivers of numbers telling retailers about the expiration dates of their perishable items—numbers that will have to be stored, transmitted in real-time and shared with warehouse management, inventory management, financial and other enterprise systems. In other words, it is going to have a really big impact.

Another benefit of RFIDs is that, unlike barcodes, RFID tags can be read automatically by electronic readers. Imagine a truck carrying a container full of widgets entering a shipping terminal in China. If the container is equipped with an RFID tag, and the terminal has an RFID sensor network, that container’s whereabouts can be automatically sent to Widget Co. without the truck ever slowing down. It has the potential to add a substantial amount of visibility into the extended supply chain.
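
To make the scenario concrete, here is a minimal Python sketch of how such read events might be represented and queried. All field names, tag identifiers, and locations are hypothetical, not any standard's actual format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative only: a minimal RFID read event as a terminal's sensor
# network might report it.
@dataclass
class TagRead:
    tag_id: str        # identifier programmed into the container's tag
    location: str      # reader location, e.g. a terminal gate
    read_at: datetime  # when the reader saw the tag

def track(reads, tag_id):
    """Return the chronological whereabouts of one tagged container."""
    hits = [r for r in reads if r.tag_id == tag_id]
    hits.sort(key=lambda r: r.read_at)
    return [(r.location, r.read_at.isoformat()) for r in hits]

reads = [
    TagRead("WIDGET-001", "Shanghai terminal gate 4",
            datetime(2018, 8, 1, 8, 30, tzinfo=timezone.utc)),
    TagRead("WIDGET-001", "Long Beach berth 12",
            datetime(2018, 8, 14, 17, 5, tzinfo=timezone.utc)),
]
for location, ts in track(reads, "WIDGET-001"):
    print(location, ts)
```

Multiply a record like this by every pallet, case, and carton in a supply chain and the bandwidth implications discussed below become obvious.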

Right now the two biggest hurdles to widespread RFID adoption are the cost of building the infrastructure and the lack of agreed-upon industry standards. But regardless...RFID implementation will be bandwidth intensive to retrieve and disseminate the mountain of information such a tool will provide.

Summary

The answer to how to meet bandwidth requirements for SCM applications is as complex as ever. Add emerging technologies like RFID to the mix of legacy point-to-point approaches, the frame relay darling, and the simplification afforded by OCx-backed IP protocols....and your IT staff will be pegging their stress meters trying to make a decision. To navigate the process of researching and acquiring the right bandwidth solution....do yourself a favor. Use the services of an independent, unbiased consultant such as FreedomFire Communications to walk the minefield for you. Your IT staff will love you for it.


Wednesday, August 22, 2018

Fog - Bringing Cloud Computing Down To Earth

Many business executives are probably wondering why their IT staff has such a sudden interest in the weather. On a sunny afternoon, they're equally perplexed by all the references to clouds. In this perspective, we'll help you decipher what they're talking about, why you should care, and how you should proceed.

What is the Cloud?

First, the "Cloud" is not a thing; it's a method of delivering IT computing services. There is no new device called the "Cloud". But a cloud-delivered solution does have a physical presence; it sits on hardware and runs software just like the servers sitting in your data center today. The difference is how it's done as much as where it's done.

Cloud computing relies on server virtualization (taking a physical server and subdividing it into a number of virtual servers that each operate independently on the same physical device), storage virtualization through SANs (storage area networks), and networking (internet, VPN, LAN, WAN). These technologies are combined and engineered to provide the experience of independent, physical infrastructure.

Second, while there are a variety of experts and providers talking about Cloud Computing, you're not going to find a single definition. One man's cloud is another's hosted server farm. We're not going to claim our definitions are the final answer, but they are consistent with what you will find in the marketplace today and where it is likely to be moving toward.

In our view, Cloud computing comes in a few flavors: Public Clouds, Private Clouds, and Multi-tenant Clouds.

Public Clouds have been with us for a while. If you use Gmail, Google Applications, Hotmail, or host your website with Microsoft (as UPi does), you're using Cloud computing. The hardware and applications reside someplace else, and you're sharing those resources with a lot of other people. You connect (typically over the internet) and make use of these services at a dramatically lower cost than you could ever hope to duplicate yourself. Think if we all had to run our own Exchange servers for email or keep technicians on staff to support a small business website; it's just not practical.

Private Clouds are the other end of the spectrum. A business has a group of servers and business applications that its various departments, operating units, and sites all connect to. These connections can be over the internet, VPN, local area networks, or wide area networks; it doesn't matter. The computing resources are centralized and everybody in that business is using them. It also doesn't matter where these servers are; they can be in a company's data center, at a co-location data center owned by somebody else, or even running on hardware provided by a 3rd party in that company's data center. What makes the Cloud the Cloud in this case, versus just a central data center, is the nuances of how the servers, storage, and networks are configured and interconnected to achieve the benefits of Cloud computing.

Multi-tenant Clouds are the hybrid answer, and where we will truly start to see something radically different from how we've traditionally thought of IT. In this Multi-tenant environment, you're sharing resources, but think of it more as a car pool than a public bus. The Multi-tenant Cloud provider subdivides the Cloud resources amongst its customers. This subdivision method varies by provider and is a key consideration when a business goes looking for these services. All the customers on the Cloud are sharing the big pool of resources, but with "fences," "swim lanes," and other methods of security and control, customer data and processing are separated and protected.

Lastly, you may also hear talk of Infrastructure as a Service (IaaS). Cloud computing is one type of IaaS. A hosted server model in which the provider owns the equipment, hosts it at their data center, and provides the services to manage the infrastructure would also fall under the IaaS umbrella. The pricing model, availability, and flexibility would be different from a Cloud, but it's still infrastructure provided as a service vs. physically delivered and owned or leased by you.

Why do I care?

Now that we've confused you and you're reaching for the aspirin bottle, let's take a second to say why a business should care about this. The answer is threefold: Availability, Flexibility and Cost.

A properly engineered Cloud solution is highly available. Individual servers share the load with other servers and if one fails the other one picks up the slack. In theory, you would never experience any down time due to server failure so long as the Cloud itself still exists (remember the Cloud does reside on physical devices and can be destroyed or incapacitated just like any physical thing). Similarly, data storage is configured using RAID (not the bug spray) technologies that allow for redundancy of the data so that a single hardware failure on the storage device won't bring your business to a halt. This is really cool, but really complicated stuff that requires very experienced technical engineers to design, configure, build and maintain. In other words, don't try this at home unless you have a very talented staff.

Now, we just said the Cloud is highly available unless something happens to the Cloud. This is not meant to be an oxymoron. The Cloud lives on servers and sits in a data center someplace. That data center could become unusable: fire, weather destruction, extended power loss, loss of connectivity. If that occurs, your highly available Cloud isn't so available. In this case, you either need a traditional disaster recovery solution, or a provider who offers a more robust solution such as data replication to a second Cloud in a second (distant) data center. With a replicated solution, you could quickly (think an hour or less, not days) bring your systems back up on the second Cloud with limited data loss from the point the first Cloud went poof.

Along with Availability, Cloud computing can provide a high degree of Flexibility. Since the Cloud is a pool of resources, a business can spin up new servers in minutes, not days. No need to acquire a new piece of server hardware, wait for delivery, install it in a rack, connect it to your network, and load your system. In a Cloud environment, you should be able to create the new server environment on the already existing physical infrastructure in a matter of minutes. Similarly, if you only need the environment for a short period of time, say a test environment for a project, you can simply turn it off when you're done rather than have an expensive asset sitting there unused. For businesses with wide swings in processing demands due to seasonality, a new product launch, or other business drivers, this flexibility can be very effective. Turn on the new servers for the peak holiday season, then turn them off in January and quit carrying that cost.

This then leads us to the third benefit, Cost. We already talked about the cost savings from not having to have infrastructure sitting around for peak seasonal demand, but even without a seasonal demand driver, studies have shown that as much as 80% of available server capacity sits idle at any point in time. That means that on average your business has a huge amount of capacity (and investment) doing nothing most of the day/week/month/year.
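
The arithmetic behind that claim is simple. Here is a rough Python sketch with illustrative figures; the server count and annual cost are made up for the example, not benchmarks.

```python
# Back-of-the-envelope math for the 80%-idle observation above.
servers = 20
cost_per_server_per_year = 5_000   # hardware, power, and maintenance (illustrative)
average_utilization = 0.20          # i.e. 80% of capacity sits idle

total_cost = servers * cost_per_server_per_year
useful_spend = total_cost * average_utilization
idle_spend = total_cost - useful_spend

print(f"Annual spend: ${total_cost:,}")
print(f"Spend backing idle capacity: ${idle_spend:,.0f}")
```

With these numbers, four dollars out of every five are paying for capacity that does nothing, which is exactly the waste that shared Cloud resources are meant to squeeze out.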

With virtualization technologies, you can squeeze some of this excess capacity out by simply sharing the servers among your own applications. A Private Cloud would achieve the same result except that you would have a 3rd party providing the platform to your business with possibly greater efficiency and effectiveness than you might be able to achieve with a limited in-house IT organization. Multi-tenant Cloud computing takes it one step further by allowing that sharing to be among multiple enterprises.

Ultimately, the true nirvana state will be when you pay for only what you use, a "Utility" model. The industry isn't quite there yet so any Cloud you get today will have some excess built into it, but the direction is clear.

The end result is lower capital cost, lower software and maintenance costs, and lower operating costs. Add these direct cost savings to the intangible savings associated with high availability and less business disruption from unplanned outages, and the business case for moving to a Cloud environment can be compelling.

How do I get there?

At the beginning of this Perspective, we said that there isn't a single definition of the Cloud. One man's Cloud is another's hosted solution. Therefore, the trick in moving to the Cloud is determining whether that's what you're getting or just a more sophisticated hosted-services solution. Not to say the latter is bad; the key thing is understanding what you're buying.

Therefore, ask these key questions:

1. What is/are the unit(s) of measure that I would be billed for?

2. How do you determine how many I need to start with?

3. How do you/I determine if the amount is sufficient for my needs now and as time goes on and my business changes?

4. In what increments can I obtain additional capacity as my business grows?

5. How often can I add/subtract capacity and what is the lead time?

6. What redundancy for the Cloud is offered and how do I assess what's right for my business?

The answers to these questions will help you quickly identify if you're looking at a Cloud.

In addition to determining the real service being offered, you'll need to do the same due diligence as for any traditional hosting services contract: review the provider's processes for managing the service, visit the data centers and ensure they are adequately configured and secured, review the provider's service level agreements, and check references closely. Because of the shared nature of the Cloud, you'll also need to delve a little deeper into a few areas: security and data protection; roles and responsibilities for management of the various layers of technology; and the technology refresh and advancement approach and commitment. Finally, since you'll be moving from where you are to a new environment, you'll need a good explanation and understanding of the migration process: approach, checks and balances, time frames, your labor commitment, and costs. Moving an enterprise of any size is a complex undertaking; make sure you and your provider have a firm understanding of what's entailed.

Summary:

To recap, in this Perspective we provided a definition for the Cloud and outlined three types (Public, Private and Multi-tenant). We talked about the advantages of moving to a Cloud environment and the key things to consider in moving in that direction.

As we said, the concept of the Cloud is really quite simple but the underlying technologies and their integration is quite complex. To create a truly Multi-tenant environment with all of the protections and security found on independent servers and the assurances of performance, availability, and redundancy required is a very complex undertaking. Selecting the right provider with the expertise to do this is ultimately the key to achieving the promised benefits of the Cloud.


Mr. Urban is the Founder and Managing Partner of UPi (Urban Partners, Inc.). UPi is dedicated to maximizing the success of all involved parties by creating and utilizing a collaborative environment between our team members and clients and within our communities. Leveraging the experience of our team in the service of our clients, we bring to mid-size businesses and organizations the deep business and IT knowledge and experience typically only available to much larger enterprises. UPi's mission is to provide senior executive level experience on, and only on, an as-needed basis to its clients and to make its charges proportional to the value delivered. With over 25 years of business management experience, Gary has delivered high value to a broad range of companies across multiple industries. He has held executive positions with global consulting organizations (Accenture and Capgemini) and multi-national corporations, including the position of VP IT for Ryder Transportation Services. In addition, he has established UPi and been a partner in another startup business that was later sold to a global company. His experience includes work on strategic business planning, IT management, outsource services management and delivery, business process design, operations strategy, and general project management. Gary holds a Bachelor of Science in Business and a Masters of Business Administration from the University of Florida.


Friday, August 17, 2018

Meeting Your Bandwidth Requirements For Supply Chain Management Applications

In today's business world it is critical for companies to deploy supply-chain management (SCM) systems to enhance efficiency across the product lifecycle by streamlining procurement, production, fulfillment, and distribution processes. Deploying an SCM solution that provides the intended return on investment requires that the applications, servers, and enterprise network infrastructure work together seamlessly. This is easier said than done and will necessitate a thorough evaluation of your bandwidth needs to meet the demand.

SCM solutions require integration of applications and data across multiple geographically dispersed supply chain partners, as well as internal integration with legacy systems. To ensure success, your organization must deploy robust, end-to-end dedicated bandwidth that delivers highly reliable and strictly monitored QoS (Quality of Service).

An SCM solution is only as strong as the weakest link in the chain. Access to SCM applications and data must be guaranteed for all of your users, inside and outside the enterprise. Your company must provide sufficient bandwidth to support constant data flow between desktops and servers at the company headquarters, geographically dispersed suppliers and partners, manufacturers, distributors, customer service call centers, and for mobile users and teleworkers. Connections between servers and desktops must provide the necessary bandwidth to deliver resource-intensive services, real-time application data to all users, and enable integration of disparate data sources.

At your headquarters office, where corporate Web, application, and database servers reside and WAN links converge, availability and security are key. A redundant backbone switching architecture with Gigabit Ethernet connectivity to servers and access switches is often indicated, along with a modular, enterprise-class routing platform that supports advanced security features and WAN bandwidth management.

In order to ensure availability over time, a successful SCM solution should be built on an application design, server architecture, and network infrastructure that can grow easily as your business grows. This is called scalability. The solution must provide the ability to easily provision more WAN bandwidth to meet peak needs, to scale with fluctuating traffic between vendors and partners, and to adapt quickly as supply chain partners are added or replaced. To accomplish this, the solution should readily accommodate new server connections, partners, and locations. Network routers should provide enough capacity to easily and economically provision additional bandwidth as traffic increases, or to add new locations as the geographic reach of the supply chain expands.

Each location involved in your SCM infrastructure will require dedicated bandwidth to meet the functions conducted at that location. This likely will involve some combination of the following choices and is dependent on the complexity of the deployed SCM system and the size of your organization:

- DS3 bandwidth, also known as a T3, is the reliable, all-purpose, digital connection for extremely high-volume requirements. Operating at 45 Mbps (equivalent to 28 DS1 circuits, or 672 DS0 channels), DS3 can provide a cost-effective solution for smaller locations in the SCM network. With DS3, you can link your high-volume host computers for resource sharing and load balancing.

- OC3 bandwidth is a fiber optic line delivering 155 Mbps (equivalent to 3 DS3 circuits), designed for those who expect constant, high bandwidth requirements. For a mid- to large-size business implementing an SCM system....this will likely be your choice for infrastructure backbone (e.g. headquarters) bandwidth.

- Gigabit Ethernet is a version of Ethernet, which supports data transfer rates of 1 Gigabit (1,000 megabits) per second. Large scale deployment of SCM systems and larger organizations will likely consider this solution.
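
The channel arithmetic behind these tiers can be sanity-checked in a few lines of Python. Note that the quoted line rates include framing overhead, so the ratios between tiers are close to, but not exactly, whole multiples.

```python
# Standard line rates for the bandwidth tiers above, in Mbps.
DS0 = 0.064     # one 64 kbps voice-grade channel
DS1 = 1.544     # T1 line rate; carries 24 DS0 channels
DS3 = 44.736    # T3 line rate; carries 28 DS1s (28 * 24 = 672 DS0 channels)
OC3 = 155.52    # SONET OC-3; roughly three DS3s of capacity
GIGE = 1000.0   # Gigabit Ethernet

assert 28 * 24 == 672  # DS0 channels in a DS3, as quoted above

print(f"DS3 is about {DS3:.0f} Mbps")
print(f"OC3 is about {OC3 / DS3:.1f} times a DS3")
print(f"GigE is about {GIGE / OC3:.1f} times an OC3")
```

This is why the article rounds DS3 to 45 Mbps and calls OC3 "3 DS3 circuits": the payload works out to three DS3s even though 155.52 / 44.736 is slightly under 3.5 at the line-rate level.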

The process to determine and then find the appropriate bandwidth solution for your SCM application can be a daunting task. Use of an unbiased professional bandwidth broker will save your IT staff countless hours of effort and headaches while guiding them through the technology minefields toward the best choice for system reliability and cost. I strongly suggest you take advantage of their expertise.


Tuesday, August 14, 2018

How To Get Maximum ROI From Cloud Deployment

Enterprises constantly strive to increase their performance and reduce operating costs while maintaining a high quality of service. Many organizations have migrated to the cloud in the past few years due to growing advocacy for this technology by business users.

Cloud ROI is difficult to understand and measure for even the most experienced business managers. Below are some tips to aid you in maximizing the ROI of your cloud deployment.

1. Maximize Use of Cloud Resources

Enterprises should use historical data to predict future use of cloud resources and buy resources accordingly. If resources are being optimally utilized, there is no unused capacity that needs to be reallocated to a different task. Cloud deployment allows flexible scaling; therefore, should the need arise, increasing capacity is not a problem. Utilizing cloud resources to the fullest is the key to increasing ROI.
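
A minimal sketch of this tip in Python: size the next period's capacity from recent usage plus a headroom margin, rather than over-buying up front. The usage figures and the 25% headroom are illustrative assumptions, not recommendations.

```python
# Forecast next period's cloud capacity from historical usage.
def capacity_plan(monthly_usage, window=3, headroom=0.25):
    """Moving average of the last `window` periods, plus headroom."""
    recent = monthly_usage[-window:]
    forecast = sum(recent) / len(recent)
    return forecast * (1 + headroom)

usage_gb_hours = [1200, 1350, 1100, 1500, 1450]  # hypothetical history
print(f"Buy about {capacity_plan(usage_gb_hours):.0f} GB-hours next month")
```

Because the cloud scales flexibly, the headroom can stay small; a short-lived spike beyond the forecast is handled by provisioning more, not by carrying idle capacity all year.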

2. Minimize Security Risks

Many enterprises are exposed to security risks like data theft and loss of valuable information. To maximize ROI, it is necessary to employ adequate security measures. Most cloud service providers already incorporate this feature into their services, but enterprises should ensure that there are no security risks due to erroneous configuration or inaccurate use of a resource.

3. Deploy Cloud Applications with a Wide Footprint

To maximize the ROI of cloud deployments, companies should deploy applications that have a wide footprint. The footprint of an application is the number of processes it will automate or take over completely. For example, if an organization uses a cloud application merely as an add-on to its HR department, it will still require a considerable staff to run day-to-day operations in that department. However, a similar application with a wide footprint will automate a multitude of tasks. This means that staff can be redeployed to fulfil other needs of the department. This increases efficiency and saves capital, which in turn maximizes ROI.

4. Minimize Hidden Costs

When an enterprise purchases services from a cloud service provider, there are many limitations on the use of applications and APIs that are being provided. These limitations might not seem like a threat when the system is being deployed, but as the operation grows, these might prove very costly for the users. Business managers should try to negotiate to get maximum use of applications and APIs in the contract even if it is not required at the current time. The high scalability of the cloud model is not very useful if there are strict limitations on use of resources.

5. Convert CAPEX to OPEX

Business managers should try to convert the maximum amount of CAPEX (capital expenditure) to OPEX (operational expenditure). This is achieved automatically to some extent because the cost of purchasing servers, workstations and licences is eliminated. However, managers can maximize this conversion by signing a win-win deal with the service provider to provide other value added services like full scale IT support and maintenance contracts. This can also help in launching new services at a very reasonable cost.
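The CAPEX-to-OPEX trade-off is easy to sanity-check with simple arithmetic. The sketch below compares an up-front purchase with a pay-as-you-go subscription; all prices are hypothetical illustration figures, not vendor quotes:

```python
# Sketch: compare an up-front CAPEX purchase with a pay-as-you-go OPEX model.
# All prices below are hypothetical illustration figures, not vendor quotes.

def capex_total(hardware, licences, annual_maintenance, years):
    """Total cost of buying and maintaining on-premise hardware over a period."""
    return hardware + licences + annual_maintenance * years

def opex_total(monthly_fee, years):
    """Total cost of an equivalent cloud subscription over the same period."""
    return monthly_fee * 12 * years

on_prem = capex_total(hardware=50_000, licences=10_000, annual_maintenance=5_000, years=3)
cloud = opex_total(monthly_fee=1_800, years=3)
print(on_prem, cloud)  # 75000 64800 -> the subscription is cheaper over 3 years
```

Running the same comparison over different time horizons, and with the vendor's actual figures, shows where the break-even point sits for a given deal.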

6. Perform Accurate ROI Analysis

Cloud is a much-discussed phenomenon in business circles and amongst IT managers, yet very few people fully understand it. Executives should examine every aspect of the cloud before performing ROI analysis for an enterprise or a project. An accurate calculation of ROI can help organizations take the right steps to maximize it.
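The core arithmetic of an ROI calculation is straightforward; the hard part is gathering accurate inputs. A minimal sketch, with hypothetical figures:

```python
# Sketch: basic ROI arithmetic. The dollar figures are hypothetical examples;
# the real work lies in estimating gain and cost accurately.

def roi(gain, cost):
    """Return on investment as a fraction: (gain - cost) / cost."""
    return (gain - cost) / cost

# Hypothetical: a $120,000 cloud project yields $150,000 in savings and revenue
print(f"{roi(150_000, 120_000):.0%}")  # 25%
```

Overstating the gain or omitting hidden costs (see tip 4 above) skews this number badly, which is why a thorough understanding of the deployment should come before the calculation.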

7. Ensure Easy Cloud Adoption

Cloud computing has many tangible and intangible benefits that can be reaped by a well-planned enterprise deployment. An important aspect of a successful deployment is promoting the use of cloud amongst all employees so they gain confidence and quickly adapt to the new technology. This will help management retire legacy systems as soon as possible without much resistance from staff. There is a direct correlation between widespread cloud use in an organization and an increase in cloud ROI.

Achieving the maximum ROI for any technology is of chief importance to any business. Cloud deployment not only enhances the efficiency and productivity in the workplace, but also allows scalability and makes future ventures more profitable. Business managers and IT administrators should make every effort to ensure the tips listed above are implemented to achieve maximum return on their investment. A well-planned cloud deployment is bound to save cost in operational expenditures as well as in capital required for future expansions.
For more information about cloud deployment, visit this link: Silicus Technologies


Thursday, August 09, 2018

For Supply Chain Management.... IT Infrastructure Is Critical

You are the weakest link - goodbye.

No business is an island, and companies working in fast-moving supply chains are expected to operate in a more joined-up way than ever before. Information is increasingly their lifeblood: modern supply chains are no longer simply about transforming raw materials into finished goods, but about sending information as quickly as possible the length and breadth of the chain. This information controls the delivery of materials, the size and timing of production runs, the particular geography in which production will occur and every detail of the distribution and delivery of goods. It is increasingly used to tune the chain to customers' real-time requirements, so that they get what they want, when and where they want it.

"The pressures now placed on any business which works within a supply chain are immense," explains Gill Hawkins, Marketing Director at Star. "In particular, large multinational companies have more and more power over those who supply them with product." Such companies play an orchestrating role within their supply chains. They are investing in IT infrastructures which facilitate the flow of information up and down the chain. "As a result," Hawkins adds, "they are enforcing increasingly high levels of IT connectivity on the people who do business with them."

The Internet is key to such connectivity. It is transforming the way in which many supply chain processes, such as purchase-ordering, are carried out. Large companies are spearheading on-line procurement initiatives, setting up on-line auctions and e-marketplaces, with which they expect suppliers to connect. They use e-mail, which is fast becoming the standard method of communication among supply chain partners. They are Web-enabling their core business systems, so that their information is available 24/7 to suppliers and customers - and they expect their suppliers to do the same. The expectations which they have of their partners' connectivity are rising daily.

The goal of today's supply chain may be the seamless, end-to-end electronic transfer of information over the Internet, but it is not yet the reality. There are numerous supply chain members without the right level of connectivity between their systems and those of their partners. Most companies will have experienced the frustration of having a supplier with a slow, unreliable e-mail system which throws a spanner into the works of their own stock-ordering process or a distributor with an inefficient, off-line logistics system which is unable to inform its customers of delivery delays.

Then, there are the partners with connectivity, but a cavalier attitude to Internet security - risking compromising the integrity and confidentiality of supply-chain information and risking bringing down their customers and suppliers' systems. Since information is so critical to today's supply chains, any company which lacks commerce-enabled business processes, supported by good Internet connections, efficient IT systems and the right attitude to security, may find itself sidelined from them.


"Smaller businesses are especially vulnerable," Hawkins points out. Their significant customers are unlikely to wait for the small companies' IT infrastructure to catch up. In a global market, large companies can always find new partners which have equipped themselves with the right level of connectivity to play. If companies are to survive in the Internet enabled supply chains of the twenty-first century, they require the right IT solutions. "If businesses aren't careful, they will find themselves making the wrong decisions on IT investment, excluding them from supply chain opportunities," Hawkins remarks. "Getting the right advice when setting up and maintaining IT infrastructure is a business-critical issue," Gill adds.

"Making snap decisions internally about what software, hardware and Internet services to use is a high-risk game," Gill continues. "You know you've got the right IT partner if it asks about your business, who your customers are and what those customers expect from you in terms of communications technology. It should understand your business aspirations before suggesting an IT solution. That's when you can tell whether it wants you as a long-term partner, rather than a short-term revenue win."

A company may fulfil all of its customers' connectivity requirements, but still be perceived as the weakest link in the chain, if it doesn't carry its suppliers with it - helping them to adopt best practice, too. The hard fact is that if a company fails to invest in the right connectivity, it loses opportunities for not only itself, but also its suppliers and customers. No company wants to be viewed as the weakest link by its business partners, because, in today's supply chain, it can mean commercial suicide.

For assistance in finding just the right network architecture and bandwidth solution for your supply chain management applications, and in comparing the multiple providers available in your specific area, we highly recommend the no-cost consulting services from:

"Supply Chain Management Bandwidth Solutions"


Monday, August 06, 2018

Key Cloud Migration Considerations

The business case has been made and you've appointed your project resources for cloud migration. It's now time to scope and plan the migration. Moving your enterprise IT workloads to the public cloud is a big decision that immediately alters the way you operate your business. It has to be approached strategically and shouldn't be taken lightly. There are many benefits to cloud IT, but you must deliberate and plan carefully. The wrong decision will cost you in more ways than you care to calculate.

Many questions have probably crowded your mind: Which cloud service provider best meets your needs? How do you calculate the cost of cloud migration and operation? How can you ensure service continuity during and after the move? What kind of security measures should you take, and what do you need to prepare for? How can you ascertain regulatory compliance? There are many more questions you should answer before migrating to the cloud.

In this article, we will discuss a few of the most pressing issues to consider when planning the move.

Private, public or hybrid?

One of the first things to decide when migrating to the cloud is whether you will go private, public or hybrid.

On a private cloud, you will have a dedicated infrastructure for your business, managed either by your teams or third-party providers. Your organization will have its own dedicated hardware, running on your private network, and located on or off premises.

A public cloud provides its services over a network that is not your private one and is available for others to use. It is usually off-site and offers a pay-per-usage billing model that can result in a cheaper solution, since resources are shared efficiently across the provider's various customers.

A hybrid cloud combines your private or traditional information technology (IT) with a public cloud. It is usually used to scale infrastructure up and down to meet demand for seasonal businesses, spikes or financial closings, or to separate the application from its data storage - for example, setting up the application layer in a public environment (such as software as a service) while storing sensitive information in a private one.

Current infrastructure utilization

This is definitely one of the things you want to evaluate when considering a move to cloud. In traditional IT, businesses usually purchase their hardware based on utilization spikes in order to avoid issues when these scenarios occur. By doing that, organizations may end up with underutilized equipment, which could result in a huge waste of money. Taking a look at your performance and capacity reports can help you address these workloads on cloud and decide whether to release unused capacity for other workloads or simply move them over and avoid new investments.
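The review of performance and capacity reports described above can be reduced to a simple screening pass. A minimal sketch, where the server names, utilization samples and 30% threshold are all hypothetical:

```python
# Sketch: flag underutilized servers from capacity-report data.
# Server names, samples, and the 30% threshold are hypothetical examples.

def underutilized(servers, threshold_pct=30):
    """Return the servers whose average utilization falls below the threshold."""
    return [name for name, samples in servers.items()
            if sum(samples) / len(samples) < threshold_pct]

reports = {
    "app-01": [20, 25, 22, 18],   # mostly idle -> right-sizing candidate
    "db-01":  [70, 85, 90, 75],   # busy -> keep its capacity as-is
}
print(underutilized(reports))  # ['app-01']
```

Servers flagged this way are the workloads where moving to the cloud (or releasing capacity) avoids paying for hardware that was sized for a spike that rarely occurs.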

Cloud Workload Analysis

Out of your IT workloads running in your datacenter, some may not be appropriate for migrating to the cloud. It isn't always easy to generalize the criteria for selecting the right applications for migration, but you need to consider all aspects of the execution environment. Given the service parameters promised by the provider, can you achieve the same level of capacity, performance, utilization, security, and availability? Can you do better? Can you afford less?

Your future growth must be factored into the decision. Can the cloud infrastructure scale as your resource consumption grows? Will your application be compliant with regulatory rules when hosted in the public cloud? How does the cloud infrastructure address compliance, if at all?

In order to make the right decision, you should thoroughly understand your current workloads and determine how closely their requirements, both for present and future evolution, can be satisfied.

Application Migration approaches

There are multiple degrees of changes you may want to do to your application depending on your short term and long term business/technical goals.

Virtualization - This model facilitates a quick and easy migration to the cloud, as no changes to the application are required. It is an ideal candidate for legacy applications.

Application Migration - In this case your application goes through minimal architecture and design changes in order to make it optimal for a cloud deployment model. For example, you may choose to use a NoSQL database available on the cloud.

Application Refactoring - This model requires a major overhaul of your application, starting from the architecture. This is typically done when you want to leverage the latest technology stack.

Backup policies and disaster recovery

How are your backup policies running today? Do they fit with your cloud provider's? This is another important point that organizations have to consider carefully. Cloud providers typically offer standard backup policies with some level of customization. It is worth looking at those to see if they are suitable for your company before they become a potential roadblock. Pay attention to retention frequency, backup type (full, incremental and so on) and versioning.

Disaster recovery and business continuity are important even for the smallest companies. The recovery point objective (RPO) defines how much data you are willing to lose, while the recovery time objective (RTO) defines how much time you are willing to allow for the data to be restored.
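Whether a backup schedule satisfies a stated RPO reduces to one comparison: in the worst case you lose everything written since the last backup, so the backup interval must not exceed the RPO. A minimal sketch with hypothetical figures:

```python
# Sketch: check whether a backup schedule satisfies a stated RPO.
# The six-hour interval and four-hour RPO are hypothetical example figures.

def meets_rpo(backup_interval_hours, rpo_hours):
    """Worst-case data loss equals the backup interval; it must not exceed the RPO."""
    return backup_interval_hours <= rpo_hours

print(meets_rpo(backup_interval_hours=6, rpo_hours=4))  # False -> back up more often
print(meets_rpo(backup_interval_hours=1, rpo_hours=4))  # True
```

The same style of check applies to RTO: compare the provider's documented restore time for your data volume against the downtime your business can tolerate.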

Licensing

Is the application licensed per VM, per core, or for total infrastructure footprint? This can have massive cost implications. If the licensing model requires that all available resources be taken into account even if not allocated to the client, licensing costs will increase if migrated to a public-cloud platform. Similarly, if the application licensing is based per core and the cloud provider does not offer the ability to configure your cloud environment per core, this will have an adverse impact on your licensing cost.
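The cost impact of the licensing model is easy to illustrate with arithmetic. In the sketch below, the per-VM and per-core prices and the environment size are hypothetical examples:

```python
# Sketch: compare per-VM and per-core licensing on the same footprint.
# Prices and environment sizes are hypothetical illustration figures.

def per_vm_cost(vms, price_per_vm):
    """Total licence cost when billed per virtual machine."""
    return vms * price_per_vm

def per_core_cost(vms, cores_per_vm, price_per_core):
    """Total licence cost when billed per core across all VMs."""
    return vms * cores_per_vm * price_per_core

vms, cores = 10, 8
print(per_vm_cost(vms, 1_000))         # 10000
print(per_core_cost(vms, cores, 300))  # 24000 -> per-core is pricier here
```

If the cloud provider cannot let you configure the environment per core, a per-core licence may effectively be charged against cores you never use, which is the adverse impact the paragraph above warns about.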

Integration

Organizations often discover application dependencies too late in the process of migrating workloads, resulting in unplanned outages and limited functionality to systems while these dependencies are addressed. Understanding the relationships between applications is critical to planning the sequence and manner in which cloud migrations occur. Can the application exist on the cloud in isolation while other systems are migrated?

Compatible operational system

Clouds are all about standards, and you need to keep versions of your operating systems and middleware up to date when you aim to migrate them to a cloud provider. You need to take into consideration that cloud service providers (CSPs) do not support end-of-life operating systems or those that are being phased out. The same likely applies to your middleware and databases.

Hopefully this post will help you make decisions about your cloud migration.


Friday, August 03, 2018

Questions to Ask Your Potential Cloud Service Provider

Seemingly everybody is talking about cloud solutions, from small businesses to large Enterprises. It's not hard to see why - the benefits over on-site deployments are numerous - rapid deployment, potentially lower costs of ownership, and reduced maintenance and administration, to name but three.

For IT companies and Managed Service Providers (MSPs) offering solutions to their clients, the cloud equals opportunity. Unsurprisingly, rather than investing the considerable time and effort required to develop their own cloud solutions from scratch, the majority of smaller IT solution providers instead partner with cloud service vendors to provide their clients with services ranging from CRM to backup.

But one of the benefits of cloud services - rapid deployment - can also lead some IT companies to look at partnerships with cloud vendors through rose-tinted glasses. If things go wrong with the cloud service, the first complaints won't go to the cloud vendors - they'll go to the IT solution providers selling those services. For this reason alone, it's important for IT solution providers to take a step back and ask potential cloud partners, "What happens when things go wrong?" and "Is it really the best solution for your business?"

Below are a few questions that you should ask your potential cloud solution provider:

Does the cloud fit our current business needs? - 

It is true that, for many businesses, the cloud is the way to go. Gartner, Inc., the world's leading information technology research and advisory company, has said that by 2020, a corporate "no-cloud" policy will be as rare as a "no-Internet" policy is today. This is the kind of hype that makes it seem like everyone who matters is already using the cloud, and those companies who have remaining physical infrastructure will be left in the dust. But that may not always be the case. Cloud migration doesn't make sense in all scenarios.

Security and Availability - 

For one, moving systems to the cloud may complicate security measures and/or unique regulatory compliance considerations. In some cases, (i.e. HIPAA, instances of national security, etc.) extreme information security is necessary and having direct control of an on-site system is critical.

Learn about how they deal with and monitor security issues, install patches and perform maintenance updates. Does it match your company's expected level of security or service? Ask where they host data and if it's a shared or a dedicated environment, and find out how many servers they have and if those servers are set in a cluster. It's also critical to know if the infrastructure is mirrored and 100 percent redundant. While you're at it, investigate their disaster recovery processes and determine if they operate out of a Tier 1 or Tier 4 data center.

Integration - 

This is a deal breaker. Be sure to ask how their solution integrates with your current IT environment and other solutions. What's their track record and game plan when it comes to integrating with other, on-premise solutions you already have installed? If halfway down the road they realize it does not integrate, what is their contingency plan and what kind of guarantees are they willing to offer?

Uptime Metrics and Reports - 

Find out how your vendor measures uptime and how that's communicated to clients, such as which parts of the hosting infrastructure (hosting, server reliability, service delivery, etc.) the uptime calculation takes into account. Ask about the processes in place for handling major outages: do they have a SWAT team in place, how do they typically communicate with the client (phone, email, RSS feed, Twitter, SMS), and at what speed and with what level of detail? Determine whether they are proactive or merely reactive when a problem occurs.
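When reading vendor uptime claims, it helps to translate the percentage into allowed downtime. A minimal sketch of that conversion (the SLA percentages shown are common examples, not any particular vendor's figures):

```python
# Sketch: translate an uptime SLA percentage into allowed downtime per year.
# A handy sanity check when comparing vendor uptime claims.

def allowed_downtime_minutes_per_year(uptime_pct):
    """Minutes of downtime per year permitted by a given uptime percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return (1 - uptime_pct / 100) * minutes_per_year

print(round(allowed_downtime_minutes_per_year(99.9)))   # ~526 minutes per year
print(round(allowed_downtime_minutes_per_year(99.99)))  # ~53 minutes per year
```

The gap between "three nines" and "four nines" is nearly eight hours of downtime a year, which is why it matters exactly what the vendor's uptime calculation includes.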

Are applications essential to your business operations cloud compatible? - 

Some applications may not run as well in the cloud, as Internet bandwidth issues may impede performance. It isn't enough to have a high-performance hosted application server if your Internet bandwidth limitations will deliver a bad user experience.
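A rough feasibility check here is to compare aggregate user demand against the office link, leaving headroom for other traffic. The per-user bandwidth, link size and 80% utilization cap below are hypothetical planning figures:

```python
# Sketch: rough check of whether an office Internet link can support a hosted app.
# Per-user bandwidth, link size, and the 80% cap are hypothetical planning figures.

def link_sufficient(users, kbps_per_user, link_mbps, utilization_cap=0.8):
    """Keep aggregate demand under ~80% of the link to leave headroom."""
    demand_mbps = users * kbps_per_user / 1000
    return demand_mbps <= link_mbps * utilization_cap

print(link_sufficient(users=50, kbps_per_user=250, link_mbps=20))   # True  (12.5 <= 16)
print(link_sufficient(users=200, kbps_per_user=250, link_mbps=20))  # False (50 > 16)
```

A hosted application server can be as fast as you like; if this check fails at your site, users will still see a bad experience.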

Another consideration to keep in mind is application portability. Although it is often easy to migrate an application server to the cloud, the application might have external dependencies that complicate the move.

Finally, older applications that run on legacy operating systems may not have cloud-friendly functionality. Before initiating a transition to a virtual infrastructure, it's essential for you to check in with your MSP partner about each application's cloud compatibility, as they should do rigorous lab testing to identify issues in advance of a move.

Assess the Vendor's Sales Process - 

Does the rep take the time to understand your company's needs or is he or she just selling for sales' sake? If the rep spends time to assess your business requirements, it's likely that same attitude permeates the entire company. Industry studies show that many applications sold out of the box fail to meet the customer's requirements because they're not customized to the client's needs. Make sure that the vendor pays attention to what you need and not just what they want to sell. Finally, after-sale support can tell you a lot about the seriousness, professional nature and quality of the internal processes of an organization.

How does a move to the cloud fit into our existing IT roadmap? - 

Technology is the backbone of modern business. That said, your IT roadmap should complement your business goals. Cloud infrastructure allows the right systems to be quickly and efficiently implemented across the business. Whether you're looking to expand your client base, attract top talent, or all of the above, using technology that boosts your business's capabilities can be a huge asset.

How is Pricing Set Up? - 

Obviously, pricing is an important question to ask. You'll want to learn about the vendor's billing and pricing structure. Most set up billing as a recurring, monthly item, but it's always good to do your homework. Are you being asked to sign a contract, or does your deal automatically renew, as with an evergreen agreement? If the vendor's price is unusually low compared to others, it should raise a red flag. Find out why. Can you cancel at any time without hidden fees? Is there a minimum number of users required in order to get the most attractive price?

By thoroughly covering this ground, you're most likely to find not only the right cloud vendor, but also the best solutions for your company and your clients.
