Wednesday, October 1, 2014

UK government launches review of the sharing economy

The government has launched a review of the “sharing economy” to evaluate the economic potential and social implications created by people sharing products and services through the web.

The review is intended to look at the impact of services such as Airbnb, which allows people to rent out homes or rooms, or BlaBlaCar, used for car journey sharing.


In his speech to the Conservative Party conference, chancellor of the exchequer George Osborne highlighted the need to understand the potential effect of disruptive technologies.

"Every single day new technologies, new companies and new economies are fundamentally shaking up the established way of doing things," he said.

"It’s never been easier for thousands to start their own business in Britain, and reach the whole world. But a single app can appear overnight and disrupt an entire industry. 

"It can be exciting – but unsettling too. For this technology brings intense competition that spells rapid decline for any sector, or any country, that fails to keep up. These are big questions that require big answers."

The consultation has been initiated by the Department for Business, Innovation and Skills and will be led by Debbie Wosskow, CEO of online home-swapping service Love Home Swap.

“Over the next few months I will be exploring the social and economic potential of the sharing economy in the UK, and making recommendations on how this potential can be reached. I will also be considering any risks to consumers, or established businesses outside the sharing economy,” said Wosskow.

“I am keen to hear a wide range of views to feed into my review, including from users and potential users of sharing economy services, businesses operating in the sharing economy, and established businesses that are not part of the sharing economy,” she said.

The consultation will look at existing services such as home or business rentals, transport sharing and personal-time sharing, as well as emerging areas including fashion, food and personal items.

The terms of reference for the review said: “Collaborative businesses such as Airbnb and TaskRabbit are growing the sharing economy – peer-to-peer marketplaces that allow people to share possessions, time and skills. These new and varied business models are attracting significant publicity and investment across a wide range of sectors.”

Wosskow is calling for evidence to be submitted to the consultation before 28 October 2014, and will report on her findings by December.

The sharing economy has already caused controversy in some countries. In the US, hotel firms have complained that Airbnb allows people to avoid paying taxes on renting properties, as well as circumventing the rules and regulations to which they are forced to adhere. Taxi hailing app Uber has led to protests by taxi drivers in London and elsewhere about unlicensed cab drivers unfairly competing with regulated providers.

The UK government hopes that a better understanding of the implications of the sharing economy will help to create an environment that encourages companies launching such services to base themselves in the UK.


Register now to receive ComputerWeekly.com IT-related news, guides and more, delivered to your inbox. By submitting you agree to receive email from TechTarget and its partners. If you reside outside of the United States, you consent to having your personal data transferred to and processed in the United States.


John Lewis invests £100,000 in micro-location startup

Retailer John Lewis is investing £100,000 in technology startup Localz, which specialises in micro-location technology.

The retailer launched its JLab technology startup incubator earlier this year to find new ways to help customers shop across channels, simplify their lives using the internet of things, and use data for in-store personalisation.

[Picture: Paul Coby, John Lewis IT director]

In May, it selected five startups to move into the JLab office space in Level39, each of which received initial funding of £12,500 and mentoring from John Lewis, Risk Capital Partners, the founder of confused.com and Silicon Valley Bank. 

After developing their ideas for 12 weeks, the competition came to a head for the startups when they had to pitch their ideas last week.

Of the five finalists, John Lewis chose to trial technology from Localz, which sends promotions to customers' mobile phones according to the part of the store they are in.

"Innovation is at the heart of John Lewis, and JLab, our first tech incubator, has given us a new way to explore the technologies that will change how we all shop in the future," said John Lewis IT director Paul Coby (pictured). 

Localz: in-store digital engagement using proximity and iBeacon technology
Musaic: wireless sound system for smart homes
SpaceDesigned: online 3D room planning
Tap2Connect: smart labelling for after-sales care
Viewsy: in-store digital engagement using sensors to track customer behaviour

Localz plans to invest its winnings in its UK operations and continued technology development. The company is also looking to hire talent to work with its London-based team.

The technology it has created could allow customers to receive specialised offers on their smartphone as they visit different retail departments or help them find their way around the store.

JLab partner Stuart Marks praised the quality of entries and said picking a winner proved difficult. 

"I am sure all the companies will go on to become very successful, but there has to be a winner, and in this case we felt Localz has the potential to become a long-term partner to John Lewis and to provide continuous innovation for its customers," he said.

John Lewis has been eyeing up technology startups for some time to keep ahead of the game in the changing landscape of the UK high street.

The retailer works with a wide range of suppliers, from large traditional firms to early-stage startups. It likes working with startups because they bring fresh thinking and can get things done quickly, since there are fewer people to deal with than at larger suppliers.




Tuesday, September 30, 2014

BT launches cloud voice service for businesses

BT Business has expanded its communications offering for small and medium-sized enterprises (SMEs) with the launch of a cloud-based, business-grade IP voice service, BT Cloud Voice.

The operator said its new service would deliver all the call features and quality of a traditional office phone system over a BT Business internet connection, providing a more flexible and future-proofed offering.


The system includes features such as intelligent call handling, conferencing, recording, desktop-sharing, softphones and smartphone integration.

The service will have three licence options: Basic, the entry-level functional service; Connect, for office-based firms with more demands than simply calling; and Collaborate, for firms with mobile or home workers that want to be able to use features such as audio-conferencing.

Users will receive an IP phone and a BT call plan designed to be shared among multiple users, with minutes to be purchased at the company level. All calls made using the system will run over BT Business’ network, said BT.

Buyers will be able to manage their service through an online portal, allowing companies to tailor and manage their own requirements, and perform a number of self-service functions, including licence management, adding and removing users, and setting call preferences without the need for BT to dispatch engineers.

BT said that, because the services are hosted, this should bring further savings for users by eliminating the need for elaborate maintenance contracts.

Graham Sutherland, BT Business CEO, said that 60% of SMEs in the UK were already using cloud-based applications to some degree, so the introduction of a cloud-based telephony offering could be seen as a natural step for a lot of the firm’s customers.

“BT Cloud Voice is a highly reliable and flexible business communications system and future-proofed solution for SMEs,” he said. “There are no initial hardware costs or engineer visits, and calling plans can be easily shared across the business.

“Our customers expect great value and high-quality products and BT Cloud Voice delivers on both counts,” said Sutherland.

In August, BT Business launched a range of plans aimed at SMEs including free 4G access and unlimited Wi-Fi.

Its latest product launch comes hot on the heels of a number of new communications offerings pitched at the smaller end of the market, and a government drive to encourage small businesses to apply for grants to upgrade their broadband.

Last week TalkTalk Business launched an SME-focused business broadband package that it claimed could save users close to £1,000 when compared to some equivalent BT services.

Virgin Media Business also unveiled a service for small businesses, saying that UK businesses were at risk of losing out to international competition if they scrimped on their communications budget, and courted entrepreneurs by running a fleet of free taxis around major cities and floating its CEO, Peter Kelly, down the Thames in a black cab.




Microsoft vies with VMware in the virtual machine market

VMware launched live migration in 2003. Since then, its capabilities have seen many enhancements, but Microsoft is starting to catch up.

Live migration entails moving active virtual machines (VMs) between physical hosts with no service interruption or downtime.


It launched 11 years ago as a landmark development in datacentre infrastructure and is now a crucial part of virtualisation infrastructure software and deployment.

A VM live migration allows administrators to perform maintenance and resolve a problem on a host without affecting users.

Moving active VMs from one hypervisor to another means you can balance the performance and load of hypervisors or, ahead of hardware maintenance, evacuate all active VMs from a host. It enables users to conserve resources during non-peak hours by consolidating VMs onto fewer servers, and to optimise network throughput by running communicating VMs on the same hypervisor. Live migration first appeared in 2003 with VMware's ESX 2.0 and quickly became popular in the IT community.
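
The load-balancing use case can be sketched as a simple greedy loop: repeatedly move the cheapest VM off the busiest hypervisor onto the least-loaded one until the cluster is roughly even. The sketch below is an illustrative toy with invented host and VM names, not how VMware DRS or any real scheduler works:

```python
# Toy VM load balancer: move the smallest VM off the busiest host
# onto the least-loaded host until hosts are roughly balanced.
# Illustrative only -- real schedulers such as DRS also weigh CPU,
# memory, affinity rules and the cost of each migration.

def balance(hosts, threshold=10):
    """hosts: dict host -> {vm: load}. Mutated in place; returns moves."""
    moves = []
    while True:
        load = {h: sum(vms.values()) for h, vms in hosts.items()}
        busiest = max(load, key=load.get)
        idlest = min(load, key=load.get)
        diff = load[busiest] - load[idlest]
        if diff <= threshold or not hosts[busiest]:
            return moves
        # Cheapest candidate: the smallest VM on the busiest host.
        vm = min(hosts[busiest], key=hosts[busiest].get)
        if hosts[busiest][vm] >= diff:
            return moves  # no single move would reduce the imbalance
        hosts[idlest][vm] = hosts[busiest].pop(vm)
        moves.append((vm, busiest, idlest))

hosts = {
    "esx1": {"web1": 30, "web2": 25, "db1": 40},
    "esx2": {"app1": 15},
}
moves = balance(hosts)
print(moves)
```

The guard that refuses moves larger than the current imbalance is what makes the loop terminate; without it, a single large VM would ping-pong between hosts forever.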

Six years after VMware pioneered VM live migration in 2003, Microsoft introduced a similar feature in Hyper-V that was shipped with Windows Server 2008 R2 – the previous version, Quick Migration, in Windows Server 2008 required a short service interruption during migration.

To understand how live migration works, it is important to be aware of the VM’s basic components: storage (the virtual hard disk) and the configuration or state. Storage is often located on a storage area network (SAN) and its configuration runs in a host server’s processor and memory. With the traditional process of a live migration, the VM’s state and configuration is copied from one physical host to another, but the VM’s storage does not move.


Storage live migration – moving the disks of a VM from one location to another while the VM continues to run on the same physical host – became available in 2006/2007 with ESX 3.0/3.5. In VMware's current offering, vSphere 5.5, vMotion (live migration of VMs) and Storage vMotion (live migration of the virtual disks) are part of the vSphere Standard edition. Automatic load balancing of VMs (distributed resource scheduling, or DRS) is available with vSphere Enterprise, and automatic load balancing of disks (Storage DRS) with vSphere Enterprise Plus. Using vMotion requires that the ESXi servers are managed by Virtual Center, that they are compatible (which boils down to compatible CPUs and a couple of minor requirements) and that they are on the same physical subnet.

Moving VMs between hypervisors that are not on the same physical network segment is not supported. Administrators need to tag an existing port or create a new VMkernel port for vMotion use; live migration can then be triggered with one click in the vSphere Client (either the traditional client or the web client). Using the web client, live migration of VMs without shared storage is also possible (shared-nothing live migration, introduced with vSphere 5.1).

Shared-nothing live migration is a combination of traditional VM live migration and storage migration. The VM’s state and configuration is copied to a destination host and the file system is moved to the destination storage device. To prevent downtime, the VM’s state and storage remain running on the original host and storage location until the copying process is completed.
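
The copy-while-running step described above is typically implemented as iterative pre-copy: transfer everything once, then keep re-sending whatever the running VM dirtied during the previous pass, until the remainder is small enough to move during one brief pause. A minimal Python sketch of the idea follows; the page contents and dirty sets are invented stand-ins for real dirty-page tracking, and the same catch-up idea applies to disk blocks in shared-nothing migration:

```python
# Iterative pre-copy, the usual mechanism behind live migration:
# copy everything once, then keep re-copying pages the running VM
# dirtied in the meantime, until the leftover set is tiny.

def live_migrate(source_pages, dirtied_per_pass, stop_threshold=2):
    """source_pages: dict page_id -> bytes on the source host.
    dirtied_per_pass: list of sets of page_ids written by the guest
    while each copy pass was in flight (a stand-in for real dirty
    page tracking)."""
    dest = dict(source_pages)            # pass 0: full copy
    passes = iter(dirtied_per_pass)
    dirty = next(passes, set())
    rounds = 1
    while len(dirty) > stop_threshold:
        for page in dirty:               # re-send only dirtied pages
            dest[page] = source_pages[page]
        dirty = next(passes, set())
        rounds += 1
    # Final stop-and-copy: pause the VM, send the last few pages,
    # then resume it on the destination host.
    for page in dirty:
        dest[page] = source_pages[page]
    return dest, rounds

pages = {i: b"x" for i in range(8)}
dest, rounds = live_migrate(pages, [{0, 1, 2, 3}, {1, 2, 5}, {2}])
print(rounds)
```

If the guest dirties pages faster than they can be re-sent, the loop never converges, which is why real hypervisors cap the number of passes before forcing the final pause.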

VMware has improved its live migration capabilities over the years, and vMotion can now leverage multiple network interfaces to speed up live migration. In VMware's upcoming vSphere 6, rumoured to be launching in March 2015, live migration over longer distances – with higher latencies and between Virtual Center instances – is expected to be available.

DRS, which leverages vMotion to balance VM workload between physical hosts, has also been improved in recent product versions. It now boasts rules that take preferences into account and can evacuate hypervisors during non-peak hours to conserve resources using distributed power management (DPM), available with DRS as part of vSphere Enterprise Edition. VMware also updated Storage vMotion in vSphere version 5.0 by moving from a dirty block tracking algorithm to I/O mirroring, improving the performance and reliability of its storage live migration capabilities.
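
The switch from dirty block tracking to I/O mirroring is worth unpacking: the former copies the disk and then loops back over blocks written in the meantime (which can struggle to converge under heavy write load), while the latter applies each in-flight guest write to both copies, so a single sequential pass suffices. A toy Python illustration of the mirroring approach, with invented structure rather than VMware's actual implementation:

```python
# Toy storage migration with I/O mirroring: while blocks are copied
# from source to destination, every guest write lands on both sides,
# so one sequential pass leaves the two copies consistent.

class MirroredMigration:
    """Copy blocks one at a time; concurrent writes go to both disks."""
    def __init__(self, source):
        self.source = source
        self.dest = {}
        self.copied = set()
        self.todo = list(source)

    def copy_step(self):
        """Copy the next block; return True when migration is done."""
        if self.todo:
            block = self.todo.pop(0)
            self.dest[block] = self.source[block]
            self.copied.add(block)
        return not self.todo

    def guest_write(self, block, data):
        self.source[block] = data
        if block in self.copied:
            self.dest[block] = data  # mirror: already-copied blocks stay in sync
        # Blocks not yet copied need no mirroring; the copy pass will
        # pick up the new data when it reaches them.

m = MirroredMigration({0: "a", 1: "b", 2: "c"})
m.copy_step()              # block 0 moved
m.guest_write(0, "A")      # mirrored to both copies
m.guest_write(2, "C")      # only source updated; block 2 not copied yet
while not m.copy_step():
    pass
print(m.dest == m.source)  # the copies converge in a single pass
```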

Microsoft introduced the ability to move VMs across Hyper-V hosts with Windows Server 2008 R2. This required VMs to reside on shared storage as part of a cluster. Even then, Hyper-V wasn’t able to move multiple machines simultaneously. However, with Windows Server 2012 and Server 2012 R2, Microsoft continued to gain ground on VMware, introducing additional migration capabilities that put Microsoft more or less on par with VMware when looking at this specific feature.

Since Windows Server 2012 R2, Hyper-V can store VMs on server message block (SMB) file shares, so live migration of running VMs stored on a central SMB share is now possible between non-clustered and clustered servers, meaning users can benefit from live migration without investing in clustering infrastructure. Windows Server 2012 R2's live migration can also use compression, reducing the time needed to perform a live migration by 50%, according to Microsoft.

Live migration in Windows Server 2012 R2 can also use improvements in the SMB 3.0 protocol, which accelerate live migration without the VM having to be stored on an SMB 3.0 share. If the customer is using network interfaces that support remote direct memory access (RDMA), live migration traffic flows faster and has less impact on the CPUs of the hosts involved.

Storage live migration was introduced to the Hyper-V feature set with Windows Server 2012. Windows Server 2008 R2 allowed users to move a running VM using traditional live migration, but you had to shut down a VM to move its storage in Windows Server 2008 R2. With the current version of Hyper-V, you can transfer a VM’s backing storage files to a new location with no downtime, a feature that is critical for migrating or updating storage, or when a load redistribution on the storage side is needed.

In their current versions, VMware’s vSphere 5.5 and Microsoft Windows Server 2012 R2 Hyper-V support shared-nothing live migration, which makes it possible to simultaneously change the location where the VM is being run as well as the backing storage location for the running VM – a feature that provides additional flexibility, especially in small business environments where centralised storage is not always present.

Microsoft has gained substantial ground in many areas, but experts agree there is still a gap between Hyper-V and VMware vSphere on enterprise-level features. Hyper-V lacks features such as vSphere Storage DRS, though others, such as Storage Spaces, offer similar functionality. But Hyper-V comes in a powerful free version, Hyper-V Server 2012, which includes native support for live migration of VMs across clustered and non-clustered hosts at no extra cost, while VMware's free hypervisor has limited functionality.

Going beyond live migration, both suppliers offer replication capabilities, which are easy to set up with both VMware vSphere and Microsoft's Hyper-V. Combined with the suppliers' cloud offerings, VMware vCloud Air Disaster Recovery and Microsoft Azure Site Recovery, users can replicate and fail over VMs to the cloud, giving extra options for self-service disaster recovery protection and business continuity.



This was first published in September 2014



Apple releases Mac OS X patches for Shellshock Bash bug

Apple has released security updates for its Mac OS X operating system to protect users from the newly reported Shellshock Bash bug affecting all Unix-based computers.  

The release comes just days after Apple confirmed that Mac OS X, which is derived from Unix, was vulnerable to the bug, although the company claimed anyone using default Mac settings should be safe.


According to Apple, only users who configured advanced Unix services were at risk, but the company did not name any of the services involved.

Some users resorted to technical workarounds, but now Apple has published automatic updates for the latest versions of OS X.

Patches are available through Software Update for OS X Mavericks, Mountain Lion and Lion.

Security experts have warned that the bug in the Bash command prompt software used in OS X and up to 500 million Unix-based computers is being actively exploited.
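
For readers who want to check their own machines, the widely published Shellshock test exports a crafted environment variable and watches whether Bash executes the command smuggled in after the function definition; a patched Bash ignores it. Run this against your own systems only:

```shell
# CVE-2014-6271 check: a vulnerable Bash executes the 'echo vulnerable'
# appended after the exported function definition; a patched one does not.
if env x='() { :;}; echo vulnerable' bash -c ":" 2>/dev/null | grep -q vulnerable
then
    echo "Bash is VULNERABLE to Shellshock (CVE-2014-6271)"
else
    echo "Bash appears patched against CVE-2014-6271"
fi
```

Note this one-liner only covers the original CVE; several follow-on Bash parser bugs were assigned separate CVEs and need their own checks.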

Researchers at security firm FireEye have observed a “significant amount of overtly malicious traffic” using Bash.

This malicious traffic includes malware droppers, reverse shells and backdoors, data exfiltration, and distributed denial of service (DDoS) attacks.

The researchers think it is only a matter of time before attackers exploit the vulnerability to redirect users to malicious hosts, which can result in further compromise.

Attackers have deployed scanners looking for vulnerable machines that have been bombarding networks with traffic since the 25-year-old bug was made public on 24 September.

The Shellshock bug is widely regarded as a bigger threat than the Heartbleed OpenSSL bug because it affects a thousand times more computers and is easily exploited to enable attackers to take full control of the target computer.

The US and UK Computer Emergency Response teams were quick to issue warnings about the Shellshock bug, and urged affected organisations to install software security updates immediately.

The Information Commissioner’s Office (ICO) has also urged organisations and individuals to make sure their IT systems are up to date.

“This flaw could be allowing criminals to access personal data held on computers or other devices. For businesses, that should be ringing real alarm bells, because they have legal obligations to keep personal information secure,” an ICO spokesperson said.

The biggest threat is to the enterprise, because many web servers run the Apache software, which can invoke the Bash component through CGI scripts.

But, while most of the main Linux distributions have rushed to release updates, security experts have raised concerns about Unix-based embedded systems in internet of things (IoT) devices and legacy systems used by many critical national infrastructure suppliers.

Security researchers have warned that, while home users and traditional servers may be able to patch their way out of danger, this solution is not available for many embedded devices and Unix-based industrial control systems.

This also applies to supervisory control and data acquisition (Scada) systems commonly used by critical national infrastructure.




A guide to scale-out NAS: The specialists and startups

In our recent feature on scale-out NAS, we looked at products from the six biggest storage suppliers.

Here we examine the smaller scale-out NAS suppliers. Some offer systems for small and medium-sized enterprises (SMEs), but others are tailored for specialised applications such as high-performance computing (HPC) and virtualisation. All include high-availability features such as redundant drives and other components, along with a range of enterprise-level features such as data deduplication and storage tiering.

[Picture: Gridstore storage appliances]

Irrespective of organisation size, scale-out NAS makes sense in a world where data volumes are increasing at unpredictable rates, and where paying upfront for large volumes of storage is increasingly viewed as uneconomical. Not only does that tie up capital, it tends to create silos of storage, which in turn increases the amount of data management required – something few businesses, especially those without an extensive IT department, undertake willingly.

Instead, scale-out architectures allow an organisation to buy what it needs when it needs it, so spreading the financial burden. As a result, research firm ESG predicts that by 2015, scale-out NAS will comprise 80% of the NAS market by revenue and 75% by capacity. Like most fields of human activity, there are trends and fashions in storage buying, and the time for scale-out NAS is now.

DataDirect Networks (DDN) aims its products at sectors that create large volumes of data, such as energy, life sciences, cloud and web, and financial services. 

The company offers two routes to scale-out NAS: ExaScaler, which is aimed at HPC applications and runs the parallel, open-source Lustre File System; and GridScaler for enterprises, which runs IBM’s General Parallel File System (GPFS).

GridScaler scales performance by adding file-serving nodes and grows capacity by adding storage appliances, supporting up to 10,000 NAS clients. It supports policy-based data tiering while retaining a single namespace. Up to 200 servers can be included in a single cluster, with throughput for a single Linux client of up to 4Gbps over Infiniband and 700Mbps over 10GbE.

ExaScaler can support up to 20,000 clients and up to 400 gateway nodes, with throughput for a single Linux client of up to 3.5Gbps over Infiniband and 700Mbps over 10GbE.

Other than that, the differences are mainly in connectivity. GridScaler offers client access over a common internet file system (CIFS) and network file system (NFS), while ExaScaler uses 10GbE and remote direct memory access (RDMA)-enabled Infiniband. Both systems use the same storage appliances at the back end.

Storage configurations start with a single SFA12K appliance, which can scale up to 1,680 SATA, SAS and flash drives with a maximum capacity of 10PB when using 20 enclosures in two 48U racks.

The SFA12K range consists of three appliances: the 12K-20 and 12K-40, which connect to the server over 10GbE or Infiniband; and the server-less 12K-20E, which packages the server into the box, reducing server-to-storage latency. Features include advanced data protection, such as automated storage tiering, snapshots, mirrored volumes and asynchronous replication.

Gridstore offers its systems (pictured) as storage for Hyper-V environments. It aims to reduce the problem of high volumes of random input/output (I/O) generated by hypervisors running multiple virtual machines (VMs), which most storage systems handle poorly.

Gridstore says its systems use virtualisation to re-establish a one-to-one relationship between a VM and its underlying storage, and to manage storage functionality on a per-VM basis rather than per logical unit number (LUN).


Capacities start at 4TB per 1U node with a three-node cluster, and can be expanded up to 48TB per node, allowing scalability up to 12PB as nodes are added. The storage systems use erasure coding. A write-back cache on a PCIe card with over 500GB of flash memory boosts throughput.

Gridstore systems are designed for two use cases. The H-Class caters for those that need high throughput, while the C-Class is aimed at those that need more capacity. All offer four 1GbE or two 10GbE ports as options. The GS-H2100-12 provides 12TB SATA and PCIe flash storage, while the capacity nodes – the GS-C2000-04 and GS-2100-12 – provide 4TB and 12TB respectively using SATA disks only, and connect using dual 1GbE ports as standard.

The software supports VM snapshots and live migration of VMs and their associated storage. Other features include VM replication, thin provisioning and data deduplication, plus the ability to prioritise traffic flows for each VM. Appliances are managed and controlled by a Gridstore vController VM.

Oracle, following its acquisition of Sun Microsystems in 2010, offers the ZFS Storage Appliances. The appliances use a combination of mechanical and flash storage, and DRAM and flash for caching to boost performance. They connect using 1GbE, 10GbE and, optionally, 8Gbps and 16Gbps Fibre Channel or Infiniband. Services include compression, data deduplication, cloning and replication. The systems can be accessed by clients using NFS, CIFS, HTTP, WebDav and FTP.

Storage consists of two controller appliances, the ZS3-2 and the ZS3-4, to which disk shelves can be connected. The ZS3-2 scales from 6TB to 1.5PB and allows up to 16 disk shelves to be attached, each with 20 or 24 disks per shelf. It comes with eight 10GbE ports as standard and a maximum port count of 32. The ZS3-4 allows up to 36 disks per shelf and so scales to 3.5PB, and includes eight 1GbE ports as standard and a maximum port count of 40.

Management tools include DTrace Analytics, which provides fine-grained visibility into disk activity and usage. As you might expect, tools for integration with Oracle databases are also available, including Snap Management for database backup management, a database compression tool and Intelligent Storage Protocol, which provides metadata to help improve storage efficiency.

Overland Storage's SnapScale series consists of clustered NAS running the company's RainCloud operating system (OS) for storage clusters, and is aimed at medium-sized businesses. Capacity can be boosted by adding hard drives or nodes to the cluster, which offers a single namespace and support for file- and block-level access. Protocols include CIFS, NFS and HTTP over 1GbE or 10GbE per node. Features include replication, compression and encryption. Maximum capacity is 512PB.

Each 2U SnapScale X2 unit can house up to 12 SAS drives up to 4TB in size, providing up to 24TB per node with a minimum drive count of four, while the 4U X4 unit scales up to 72TB from its 36-drive maximum.

Within a cluster, files can be distributed and data striped across nodes for improved throughput. The systems provide high availability through redundancy and will fail over in the event of a drive or node failure.

Panasas targets ActivStor at energy, finance, government, life sciences, manufacturing, media and university research, and claims its system combines the benefits of flash performance and SATA economy.

ActivStor runs the company's PanFS parallel file system, and delivers linear scalability from its blade architecture via out-of-band metadata processing and parallel processing of its triple-parity RAID 6+ reads and writes. Maximum per-system throughput is 150Gbps. It uses a combination of director and storage blades to allow users to achieve the required balance between performance and capacity.

The ActivStor 14 scales to 8.12PB with a per-shelf capacity of 80TB of SATA and 1.2TB of flash drives, providing throughput of 1.6Gbps and 13,550 IOPS per shelf. At the top end, the ActivStor 16 scales to 12.24PB and provides a claimed system total of more than 1.3 million IOPS, again at more than 13,550 IOPS per shelf. HDD capacity per shelf is 120TB of SATA plus 2.4TB of SSD.

Connectivity is provided by two 10GbE or eight 1GbE ports per shelf over CIFS, NFS or Panasas DirectFlow, a parallel protocol that provides access for Linux clients.

Quantum's Q-Series is aimed at high-performance, big data workflows, such as healthcare and life sciences, science and engineering, and media organisations, with its 2U QXS range – one of two product lines within the series – aimed at scalability.

The QXS-1200 and QXS-2400 clusters deliver a maximum capacity of 384TB and 230.4TB from 96 and 192 drives respectively. Each QXS-1200 unit houses up to 12 7,200rpm 4TB NL-SAS drives and is designed to provide economical capacity, while the QXS-2400 provides higher performance from up to 24 10,000rpm 1.2TB SAS drives per unit. The systems connect over 16 16Gbps Fibre Channel ports.

Scalability is provided by 2U expansion units, up to seven of which can be attached per base system; these have the same capacities and drive types as the base system. Client systems supported include Windows, Mac and Linux.

Scale Computing's HC3 is aimed at virtualisation consolidation in medium-sized organisations with small IT departments. The range consists of three converged server-and-storage appliances that include a licensed hypervisor, and which scale from a single 6TB unit to an eight-unit cluster providing 28.8TB in a single namespace, managed from a single pane of glass. Guest operating systems officially supported by the hypervisor-in-a-box systems include RHEL/CentOS, SuSE Linux Enterprise and most recent versions of Windows.

The range starts with the HC1000, which offers a maximum of 8TB from four 2TB drives, accessible over two 1GbE or 10GbE ports. The HC2000 increases the number of CPU cores by 50% from four to six and provides up to 4.8TB from four 1.2TB 10,000rpm SAS drives – 15,000rpm 600GB drives can be specified as an alternative. Its port count is identical to that of the HC1000. 

The top-end HC4000 doubles the number of CPU cores and houses eight 10,000rpm 1.2TB drives for a maximum capacity of 9.6TB. It provides a pair of 10GbE ports only. Nodes of different sizes can be accommodated in a single cluster.
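The per-node capacities quoted for the three HC3 models are simply drive count times drive size. A short sketch confirming the arithmetic (the model table and names are taken from the article; cluster-level usable capacity will differ once replication is applied, which the article does not detail):

```python
# Per-node maximum raw capacity for each HC3 model, as listed in the article:
# (drive count, drive size in TB)
models = {
    "HC1000": (4, 2.0),   # four 2TB drives
    "HC2000": (4, 1.2),   # four 1.2TB 10,000rpm SAS drives
    "HC4000": (8, 1.2),   # eight 1.2TB 10,000rpm SAS drives
}

capacity_tb = {name: round(count * size, 1) for name, (count, size) in models.items()}
print(capacity_tb)  # HC1000: 8.0, HC2000: 4.8, HC4000: 9.6
```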



This was first published in September 2014



HMRC promises personal online tax accounts in new digital strategy

HM Revenue & Customs (HMRC) has launched a digital strategy that will see the creation of personalised online tax accounts for taxpayers and businesses in the next four years.

The plan is HMRC’s contribution to the wider Whitehall drive towards “government as a platform”, which was today endorsed by the head of the civil service, Sir Jeremy Heywood.


HMRC’s digital strategy, published on the Gov.UK website today, describes a roadmap for developing digital services to improve how taxpayers and businesses engage with the department. While few of the items listed have specific milestones or deadlines, the strategy outlines the key elements expected to be put in place by 2018.

Central to this are personalised digital tax accounts, based on a “multi-channel digital tax platform”. Personal tax accounts will allow users to “file, pay and make changes across all of their taxes, in a single place” based on real-time data. 

For example, taxpayers will be able to log in and see tax statements, details about tax codes, file tax returns and make payments online. The account will also include a secure personal mailbox to allow HMRC to communicate electronically with taxpayers.

Individual tax accounts such as this have not been possible in the past because of the siloed nature of HMRC’s systems – in many cases, IT systems were built around particular taxes, such as PAYE or national insurance, rather than the individual taxpayer.

“All of our customers and the businesses we serve will have access to a personalised online tax account. They’ll be able to file, pay and make changes across all of their taxes, in a single place. Customers will increasingly be able to get what they need from us online,” said the HMRC strategy document published online.

“We will use the data and intelligence we have about our customers to present the personalised service. They will feel that we have a really good understanding of what is going on in their lives. When they deal with us it will be as though we are picking up a conversation where they left off – rather than starting all over again.”

To enable personal online services, work is underway on a multi-channel tax platform including the sort of customer relationship management (CRM) capabilities commonly used by banks and retailers to offer co-ordinated services to customers across several channels.

“We are building a new digital platform with a common infrastructure that links existing and new systems,” said the HMRC strategy document.

“This means we will be more consistent and responsive in the way we provide our services. The platform will be secure, reliable, flexible and scalable, allowing us to develop services quickly. We will be able to manage customer contact flexibly through a range of communication channels including phone, secure messaging and webchat.”

Neither of these new digital services has a specific deadline published for implementation, but the strategy document said it expects to see “active relationships develop with customers through digital tax accounts and assisted digital services”, during 2014 or 2015, along with the “first wave of digital services for individuals and businesses designed to help customers pay the right tax at the right time”.

The document said that by 2018 “dealing with HMRC through personalised, multi-channel digital services [will be] the norm for the majority of customers”.

The strategy said that the move to digital services requires new skills in the department, and a new “culture and mindset” involving “an increasing number of cross-functional service delivery teams”.

“Every customer in the UK will have their own personalised digital tax account, so we can help make it simpler, quicker and easier to pay the right tax at the right time. This will have big implications for our staff, as well as our customers, involving changes to the types of job we will be doing and the skills we will need,” said HMRC.

One of the key dependencies is developing in-house digital development skills – an area HMRC admits is currently limited. The department started a recruitment programme in January to find 50 staff for a new digital centre in Newcastle. The centre was formally opened in late July but HMRC is still advertising for a number of roles.

Most of HMRC’s IT is currently provided under one of Whitehall’s biggest outsourcing deals – the £800m-per-year Aspire contract, with a consortium led by Capgemini with Fujitsu as a key partner. 

In July, the National Audit Office (NAO) warned the department that it was taking too long to prepare for the end of the Aspire contract in June 2017. The NAO pointed to serious risks to HMRC’s business if it fails to replace the deal in line with government reforms that mandate moving away from large IT outsourcing arrangements.

Every year HMRC issues 245 million paper forms, sends 200 million outbound letters, receives 73 million customer support phone calls and 70 million items of post. The digital strategy aims to change the mix of interactions to be predominantly digital by 2018.

Source: HMRC digital strategy

But the strategy also includes provision for “assisted digital” services for people who are unable or unwilling to transact with HMRC online.

“We know that not everyone is ready or able to use digital services. People have a range of needs so, for example, we’ll need to provide extra support and encouragement for those who need a bit of help, right through to offering different ways for people to get their information into our digital services for those who really can’t do it alone,” said the strategy.

Sir Jeremy Heywood, the head of the civil service, wrote in a blog post on Gov.UK this week that he sees “government as a platform” and digital services as a key driver of change in Whitehall.

“Things are changing in the civil service. The changes might be hard to see from outside – you won’t have heard about them on the news – but they are happening,” he said.

“Technology, and the internet in particular, are the driving forces. Many in the world of business understood this and adapted to it years ago. The civil service lagged behind. Now we are changing that.”

The HMRC digital strategy is being led by Mark Dearnley, the former CIO of Vodafone, who joined as the organisation’s new chief digital and information officer in October last year.



