
Tech News (3711 articles)

“We can sell dangerous surveillance systems to police or we can stand up for what’s right. We can’t do both,” says a protesting Amazon employee

Natasha Mathur
18 Oct 2018
5 min read
An Amazon employee has spoken out, in a letter, against Amazon selling its facial recognition technology, Rekognition, to police departments. News of Amazon selling the technology to police first broke in May this year. Earlier this week, Jeff Bezos spoke at the WIRED25 Summit about using technology to help the Department of Defense: "we are going to continue to support the DoD, and I think we should. The last thing we'd ever want to do is stop the progress of new technologies. If big tech companies are going to turn their back on US Department of Defense, this country is going to be in trouble."

Soon after, a letter was published yesterday on Medium by an anonymous Amazon employee, whose identity was verified offline by the Medium editorial team. It read, "A couple weeks ago, my co-workers delivered a letter to this effect, signed by over 450 employees, to Jeff Bezos and other executives. We know Bezos is aware of these concerns... he acknowledged that big tech’s products might be misused, even exploited, by autocrats. But rather than meaningfully explain how Amazon will act to prevent the bad uses of its own technology, Bezos suggested we wait for society’s immune response."

The letter also laid out the employees' demands: remove Palantir, the software firm powering ICE's deportation and tracking program, from Amazon Web Services, and establish employee oversight for ethical decisions within the company. It also makes clear that the concern is not about harm some company might cause in the future; it is about the fact that Amazon is "designing, marketing, and selling a system for mass surveillance right now". Rekognition is already being used by law enforcement with no debate or restrictions on its use from Amazon.

For instance, Orlando, Florida, is currently testing Rekognition with live video feeds from surveillance cameras around the city. Rekognition is a deep-learning-based service capable of storing and searching tens of millions of faces at a time. It can detect objects, scenes, activities, and inappropriate content. Amazon had also drawn criticism from the ACLU over selling Rekognition to police: "People should be free to walk down the street without being watched by the government. By automating mass surveillance, facial recognition systems like Rekognition threaten this freedom, posing a particular threat to communities already unjustly targeted in the current political climate. Once powerful surveillance systems like these are built and deployed, the harm will be extremely difficult to undo."

Amazon was quick to defend itself at the time, saying in a statement emailed to various news organizations, "Our quality of life would be much worse today if we outlawed new technology because some people could choose to abuse the technology. Imagine if customers couldn’t buy a computer because it was possible to use that computer for illegal purposes? Like any of our AWS services, we require our customers to comply with the law and be responsible when using Amazon Rekognition."

The Amazon employees' protest centers on the same concern as the ACLU's: putting Rekognition in the hands of the government puts people's privacy at stake, since people won't be able to go about their lives without being constantly monitored. "Companies like ours should not be in the business of facilitating authoritarian surveillance. Not now, not ever. But Rekognition supports just that by pulling dozens of facial IDs from a single frame of video and storing them for later use or instantly comparing them with databases of millions of pictures. We cannot profit from a subset of powerful customers at the expense of our communities; we cannot avert our eyes from the human cost of our business," mentions the letter.

The letter also points out that Rekognition is inaccurate, a "flawed technology" more likely to "misidentify people" with darker skin tones. Earlier this year, Rekognition was tested with pictures of members of Congress compared against a collection of mugshots; the result was 28 false matches, with incorrect results disproportionately affecting people of color. This makes government use of Rekognition irresponsible, unreliable, and unethical. "We will not silently build technology to oppress and kill people, whether in our country or in others. Amazon talks a lot about values of leadership. If we want to lead, we need to make a choice between people and profits. We can sell dangerous surveillance systems to police or we can stand up for what’s right. We can’t do both," reads the letter.

For more information, check out the official letter by Amazon employees.

  • Jeff Bezos: Amazon will continue to support U.S. Defense Department
  • Amazon increases the minimum wage of all employees in the US and UK
  • Amazon is the next target on EU’s antitrust hitlist
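To make the face-comparison capability discussed above concrete, here is a minimal sketch of calling Rekognition's CompareFaces API via the AWS `boto3` SDK. The helper names and the 90% similarity threshold are our own choices, and the real service call requires AWS credentials; only the pure response-handling helper runs without them.

```python
def best_match(response, threshold=90.0):
    """Return (similarity, bounding_box) for the strongest face match at or
    above `threshold`, or None. `response` follows the shape documented for
    Rekognition's CompareFaces API (a FaceMatches list of Similarity/Face)."""
    matches = [m for m in response.get("FaceMatches", [])
               if m["Similarity"] >= threshold]
    if not matches:
        return None
    top = max(matches, key=lambda m: m["Similarity"])
    return top["Similarity"], top["Face"]["BoundingBox"]


def compare_faces(source_bytes, target_bytes, threshold=90.0):
    """Call the real service (requires AWS credentials and the boto3 package)."""
    import boto3  # imported lazily so best_match() works without boto3 installed
    client = boto3.client("rekognition")
    response = client.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,
    )
    return best_match(response, threshold)
```

The employees' concern maps directly onto this interface: a single frame of video yields many such comparisons against a stored face database, with the threshold entirely under the operator's control.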


Microsoft starts AI School to teach Machine Learning and Artificial Intelligence

Amey Varangaonkar
25 Jun 2018
3 min read
The race for cloud supremacy is getting more interesting with every passing day. The three major competitors, Amazon, Google, and Microsoft, keep coming up with fresh and innovative ideas to attract customers and get them to try and adopt their cloud offerings. The most recent move came from Google, which announced free Big Data and Machine Learning training courses for the Google Cloud Platform. These courses let students build intelligent models on the Google cloud using cloud-powered resources. Microsoft has now followed suit with its own AI School, whose promise is quite similar: allowing professionals to build smart solutions for their businesses using the Microsoft AI platform on Azure.

AI School: Offering custom learning paths to master Artificial Intelligence

Everyone has a different style and pace of learning. Keeping this in mind, Microsoft has segregated its learning material into three levels: beginner, intermediate, and advanced. This lets intermediate and advanced learners pick up the topics they want to skill up in without having to go through the basics, while still giving them the option to do so if they're interested. The topic coverage in the AI School is quite interesting as well, ranging from introductions to deep learning and Artificial Intelligence to building custom conversational AI. Along the way, students will use a variety of tools: Azure Cognitive Services and the Microsoft Bot Framework for pre-trained AI models, Azure Machine Learning for deep learning and machine learning capabilities, as well as Visual Studio and the Cognitive Toolkit. Students can also work in their favourite programming language, from Java, C#, and Node.js to Python and JavaScript.

The end goal of this program, as Microsoft puts it, is to empower developers to use Artificial Intelligence capabilities within their existing applications to make them smarter and more intuitive, all while leveraging the power of the Microsoft cloud.

Google and Microsoft have stepped up; time for Amazon now?

Although Amazon does provide training and certifications for Machine Learning and AI, it has yet to launch its own courses to encourage learners to pick up these technologies from scratch and adopt AWS to build their own intelligent models. Considering Amazon dominates the cloud market with almost two-thirds of the market share, this is quite surprising. Another interesting point is that Microsoft and Google have both taken significant steps to contribute to open source and free learning. Google-acquired Kaggle is a great platform for hosting machine learning competitions and learning new, interesting things in the AI space, while Microsoft's recent acquisition of GitHub takes it in a similar direction of promoting open source culture and sharing knowledge freely. Is Amazon waiting for a similar acquisition before taking this step in promoting open source learning? We will have to wait and see.


Hey hey, I wanna be a Rockstar (Developer)

Aaron Lazar
27 Jul 2018
5 min read
New programming languages keep popping up every now and then, but here's something that's out of the box - jukebox, to be precise! If you've ever dressed up (or at least thought of it) in leather tights and a leather jacket, with an axe strung around your neck, belting out your favourite numbers, you're probably going to love this! Somebody... no, not Nickelback... created a language designed for writing computer programs using song lyrics! The language is called... hold your breath... Rockstar! Say, what?? Are you kidding me? Is this some kind of joke/'fake news'? No, it's not. It's as real as Kurt writing those songs she sang in Hole! ;)

Rockstar is heavily influenced by the lyrical conventions of 1980s hard rock and power ballads. And the somebody who created it is Dylan Beattie, a Microsoft MVP for Visual Studio and Development Technologies. Unsurprisingly, Dylan's a musician himself. Rockstar is already growing in popularity! Just take a look at the growth on GitHub and the discussions going on on Reddit.

You ask, why would Dylan do such a thing? Cos, as Van Halen would say, "Everybody Wants Some"! Well, he thought it would be cool to have a language where you can use your favourite lyrics to drive your computer, and HR recruiters, nuts! It's mainly part of a movement to stop recruiters from using the term "Rockstar Programmers". Did I say movement? Rockstar supports a unique feature known as poetic literals, which allows programmers to simultaneously initialize a variable and express their innermost angst. I'm sure Billie Joe Armstrong and Axl Rose will appreciate this!

This is what sample Rockstar code looks like, solving the FizzBuzz problem. Let's start with the minimalistic version:

Modulus takes Number and Divisor
While Number is as high as Divisor
Put Number minus Divisor into Number
(blank line ending While block)
Give back Number
(blank line ending function declaration)
Limit is 100
Counter is 0
Fizz is 3
Buzz is 5
Until Counter is Limit
Build Counter up
If Modulus taking Counter, Fizz is 0 and Modulus taking Counter, Buzz is 0
Say "FizzBuzz!"
Continue
(blank line ending 'If' Block)
If Modulus taking Counter and Fizz is 0
Say "Fizz!"
Continue
(blank line ending 'If' Block)
If Modulus taking Counter and Buzz is 0
Say "Buzz!"
Continue
(blank line ending 'If' Block)
Say Counter
(EOL ending Until block)

And now, the same thing in idiomatic Rockstar code:

Midnight takes your heart and your soul
While your heart is as high as your soul
Put your heart without your soul into your heart

Give back your heart

Desire is a lovestruck ladykiller
My world is nothing
Fire is ice
Hate is water
Until my world is Desire,
Build my world up
If Midnight taking my world, Fire is nothing and Midnight taking my world, Hate is nothing
Shout "FizzBuzz!"
Take it to the top
If Midnight taking my world, Fire is nothing
Shout "Fizz!"
Take it to the top
If Midnight taking my world, Hate is nothing
Say "Buzz!"
Take it to the top
Whisper my world

Oh yeah, did I mention that Rockstar doesn't care two hoots about indentation? Also, it discourages the use of comments. Why? Cos this is Rock 'n' Roll, baby! Let whoever wants to know the meaning discover it on their own! Now that's hardcore!

To declare a variable in Rockstar, you simply preface any unique name (e.g. "Suzanne") with a common word like "a", "an", "the", "my" or "your". For types, you can use words like "mysterious", meaning no value is assigned, or "nothing"/"nowhere"/"nobody" for null. You could name your variable "em", so that to increment it you'd use "build em up" and to decrement it you'd use "knock em down". Now if that's not cool, you tell me what is! Like in Ruby or Python, variables are dynamically typed and you don't need to declare them before use.

That's not all! For I/O, you're at liberty to use words like "listen to", "shout", "whisper" or "scream". Someone actually happened to test out the error handling capabilities of Rockstar a couple of days ago: if you accidentally type "!love" as a property, it returns "you give !love a bad name". I wonder what it would do if we just typed in the lyrics to Sweet Child o' Mine. Nevertheless, the GitHub (Shooting) Stars are growing like a weed (pun intended) ;) I suggest you Don't Stop Believin' in it and go check this language out! And don't forget to tell us in the comments how it Rock(ed) You Like a Hurricane, or better yet, Shook You All Night Long! ;)

  • Will Rust Replace C++?
  • Why Guido van Rossum quit as the Python chief (BDFL)
  • Apollo 11 source code: A small step for a woman, and a huge leap for ‘software engineering’
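The idiomatic FizzBuzz works because of those poetic literals: per the Rockstar spec, each word in the literal contributes one digit, namely its letter count modulo 10, so "a lovestruck ladykiller" encodes 1-0-0, i.e. 100, while "ice" is 3 and "water" is 5. Here's a small Python sketch of that decoding rule (the helper name is ours, and the full spec also handles decimal points and other punctuation):

```python
import re


def poetic_number(words: str) -> int:
    """Decode a Rockstar poetic number literal: each word becomes one
    digit, its letter count modulo 10 (non-letter characters ignored)."""
    digits = [len(re.sub(r"[^A-Za-z]", "", w)) % 10 for w in words.split()]
    return int("".join(str(d) for d in digits))
```

So `Desire is a lovestruck ladykiller` initializes Desire to 100, which is exactly how the idiomatic version smuggles in `Limit is 100` while expressing its innermost angst.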


It’s Day 1 for Amazon Devices: Amazon expands its Echo device lineup, previews Alexa Presentation Language and more

Sugandha Lahoti
21 Sep 2018
4 min read
Amazon unveiled a range of Echo devices at the Amazon Devices Event hosted at its Seattle headquarters yesterday. The products announced included a revamped selection of Amazon's smart speakers (Echo Sub, Echo Dot, and Echo Plus), smart displays (the Echo Show and Echo Spot), and other smart devices. Also released were a smart microwave (AmazonBasics Microwave), the Echo Wall Clock, Fire TV Recast, and the Amazon Smart Plug. This event marks the largest number of devices and features (over 30) that Amazon has ever launched in a day.

Alexa Presentation Language

For developers, Amazon introduced the Alexa Presentation Language, to easily create Alexa skills for Alexa devices with screens. The Alexa Presentation Language (APL) is in preview and allows developers to build voice experiences with graphics, images, slideshows, and video. Developers will be able to control how graphics flow with voice, customize visuals, and adapt them to Alexa devices and skills. Supported devices will include Echo Show, Echo Spot, Fire TV, and select Fire Tablet devices. Now let's take a broad look at the key device announcements.

Amazon Smart Speakers

Echo Dot: The new version of the smart speaker offers 70 percent louder sound than its predecessor. It is a voice-controlled smart speaker with Alexa integration that can serve up music, news, information, and more. The driver has grown from 1.1" to 1.6" for better sound clarity and improved bass. It is Bluetooth enabled, so you can connect it to another speaker or use it all by itself.

Echo Input: If you already have speakers, this device adds Alexa voice control to them via a 3.5mm audio cable or Bluetooth. It has a four-microphone array, and at just 12.5mm tall it is thin enough to disappear into the room. It will be available later this year for $34.99.

Echo Plus: Echo Plus combines Amazon's cloud-based Natural Language Understanding and Automatic Speech Recognition with a built-in Zigbee hub to make it one of the premier smart speakers. It also has a new fabric casing and a built-in temperature sensor. Pre-orders for this model begin today at $149.99.

Echo Link: The Echo Link connects to a receiver or amplifier, with multiple digital and analog inputs and outputs for compatibility with your existing stereo equipment. It can control music selection, volume, and multi-room playback on your stereo with your Echo or the Alexa app. Echo Link will be available to customers soon.

Echo Sub: This 100-watt subwoofer can connect to other speakers to create a 2.1 sound solution. The $129.99 Echo Sub launches later this month, with pre-orders beginning today.

Amazon Smart Displays

Echo Show: The new Echo Show is completely redesigned with a larger screen, a smart home hub, and improved sound quality. Amazon is also introducing Doorbell Chime Announcements, so users will hear a chime on all Echo devices when someone presses their smart doorbell. Echo Show includes a high-resolution 10-inch HD display and an 8-mic array. The new Echo Show will be available for $229.99; shipping starts next month.

Other Smart Devices

Echo Wall Clock: A $30 Echo companion device, this 10-inch, battery-powered analog clock features Alexa-powered voice recognition and a ring of 60 LEDs around the rim that shows ongoing Alexa timers. It also syncs time automatically and adjusts for Daylight Savings Time.

AmazonBasics Microwave: A $59.99 voice-activated microwave. It features Dash Replenishment and an array of Alexa features, including integration with connected ovens, door locks, and other smart fixtures, reminders, and access to more than 50,000 third-party skills.

Fire TV Recast: A companion DVR that lets users watch, record, and replay free over-the-air programming on any Fire TV or Echo Show and on compatible Fire tablets and mobile devices. Users can record up to two or four shows at once and stream on any two devices at a time. It can also be paired with Alexa.

Amazon Smart Plug: The Amazon Smart Plug works with Alexa to add voice control to any outlet. You can schedule lights, fans, and appliances to turn on and off automatically, or control them remotely when you're away.

Follow along with the live blog of the event for a minute-by-minute update.

  • Google to allegedly launch a new Smart home device
  • Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
  • The iRobot Roomba i7+ is a cleaning robot that maps and stores your house and also empties the trash automatically
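The Alexa Presentation Language mentioned above is a JSON-based document format. As a rough illustration of its preview-era shape (the document structure follows Amazon's APL preview documentation as we understand it; the text content and font size here are our own placeholder values), a minimal APL document could look like this, built and serialized in Python:

```python
import json

# A minimal APL document: a top-level type/version plus a mainTemplate
# whose items describe what to render on the screen.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "parameters": ["payload"],
        "items": [
            {
                "type": "Text",
                "text": "Hello from an Alexa skill with a screen",
                "fontSize": "40dp",
            }
        ],
    },
}

# A skill would send this document as JSON inside a RenderDocument
# directive; here we just serialize it to show the wire format.
print(json.dumps(apl_document, indent=2))
```

The idea is that the voice interaction model stays in the skill, while documents like this describe the visuals that flow alongside it on Echo Show, Echo Spot, and Fire TV.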


Apple T2 security chip has Touch ID, Security Enclave, hardware to prevent microphone eavesdropping, amongst many other features!

Melisha Dsouza
31 Oct 2018
4 min read
Apple's special event held in Brooklyn yesterday saw the unveiling of a host of new hardware and software, including the MacBook Air 2018 and the Mac mini. Along with this, Apple also published a complete security overview white paper detailing the T2 security chip incorporated into the Mac mini and MacBook Air. The chip disconnects the device's microphone when the laptop is closed. It also prevents tampering with data while introducing a strict level of security for Apple's devices. Let's look at the features of this chip that caught our attention.

#1 Disabling the microphone on closing the laptop

One of the major features of the T2 chip is disconnecting the device's microphone when the laptop is closed. The chip, first introduced in last year's iMac Pro, has been upgraded to prevent any kind of malware from eavesdropping on a user's conversation once the laptop's lid is shut. Apple further notes that the camera is not disabled because the field of view of the lens is completely obstructed while the lid is closed.

#2 Secure Enclave

The Secure Enclave is a coprocessor incorporated within the system on chip (SoC) of the Apple T2 Security Chip. It provides dedicated security by protecting the cryptographic keys needed for FileVault and secure boot. What's more, it processes fingerprint data from the Touch ID sensor and checks whether a match is present. Apple further mentions that its limited function is a virtue: "Security is enhanced by the fact that the hardware is limited to specific operations."

#3 Storage Encryption

The Apple T2 Security Chip has a dedicated AES crypto engine built into the DMA path between the flash storage and main system memory, making it very efficient to perform internal volume encryption using FileVault with AES-XTS. The Mac unique ID (UID) and a device group ID (GID) are AES 256-bit keys included in the Secure Enclave during manufacturing. The chip is designed so that no software or firmware can read the keys directly; they can be used only by the AES engine dedicated to the Secure Enclave. The UID is unique to each device and is generated entirely within the Secure Enclave rather than in a manufacturing system outside the device, so the UID key isn't available for access or storage by Apple or any Apple suppliers. Software that runs on the Secure Enclave takes advantage of the UID to protect Touch ID data, FileVault class keys, and the Keychain.

#4 Touch ID

The T2 chip processes data from the Touch ID sensor to authenticate a user. Touch ID data is a mathematical representation of the fingerprint, which is encrypted and stored on the device, protected with a key available only to the Secure Enclave, which uses it to verify a match with the enrolled information. The data cannot be accessed by macOS or by any apps running on it, is never stored on Apple servers, and is not backed up to iCloud, ensuring that only authenticated users can access the device.

#5 Secure Boot

The T2 Security Chip ensures that each step of the startup process contains components that are cryptographically signed by Apple to verify integrity; the boot process proceeds only after verifying the integrity of the software at every step. When a Mac computer with the T2 chip is turned on, the chip executes code from read-only memory known as the Boot ROM. This unchangeable code, referred to as the hardware root of trust, is laid down during chip fabrication and audited for vulnerabilities to ensure all-round security of the process.

These robust features of the T2 chip are definitely something to watch out for. You can read the whitepaper to understand more about the chip's features.

  • Apple and Amazon take punitive action against Bloomberg's 'misinformed' hacking story
  • Apple now allows U.S. users to download their personal data via its online privacy data portal
  • Could Apple's latest acquisition yesterday of an AR lens maker signal its big plans for its secret Apple car?
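The secure boot flow described in #5 can be pictured as a chain of trust: an immutable root vouches for the first stage, and each verified stage vouches for the next, with boot halting at the first mismatch. Here is a toy Python sketch of that idea (the hash-appending scheme is a simplified stand-in of our own; the real chip verifies Apple's public-key signatures in silicon, not bare SHA-256 digests):

```python
import hashlib


def sha256_hex(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()


def build_chain(images):
    """'Sign' a boot chain: working backwards, append to each image the
    digest of the stage after it, so each verified stage vouches for its
    successor. Returns (root_digest, chain); the root digest plays the
    role of the value burned into the Boot ROM at fabrication."""
    chain = []
    next_digest = b""
    for image in reversed(images):
        stage = image + next_digest
        next_digest = sha256_hex(stage).encode()
        chain.append(stage)
    chain.reverse()
    return sha256_hex(chain[0]), chain


def verify_boot_chain(root_digest, chain):
    """Boot proceeds only while every stage matches the digest vouched
    for by its predecessor, starting from the hardware root of trust."""
    expected = root_digest
    for i, stage in enumerate(chain):
        if sha256_hex(stage) != expected:
            return False  # integrity check failed: halt the boot
        if i + 1 < len(chain):
            # The last 64 bytes of a stage are the hex digest it vouches
            # for in the next stage (by construction in build_chain).
            expected = stage[-64:].decode()
    return True
```

Tampering with any stage after the chain is built breaks verification at exactly that step, which mirrors the white paper's claim that boot proceeds only after the integrity of the software is verified at every step.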


Data boosts confidence and resilience in companies despite the uncertain global economic climate: YouGov survey in Asia Pacific & Japan finds

Anonymous
07 Dec 2020
7 min read
JY Pook, Senior Vice President, Asia Pacific, Tableau

Especially after the long bumpy ride we have all been on since the start of 2020, and as we continue to live through the pandemic, can businesses really be optimistic about their future health? It can be particularly tough for businesses in Asia Pacific & Japan (APJ), who have been weathering this storm the longest. The chain of events triggered by the public health pandemic, which dealt shockwaves like we've never seen, is still reverberating today. But as we approach the end of the year, APJ is finally on a slow path to recovery, even though progress is uneven across markets. Business leaders can now apply learnings from the first phase of the pandemic to get a better grip on their business, but the predominant sentiment remains one of caution. With reports of new waves of infections disrupting economic recovery, the future remains uncertain. Still, there are business leaders who are feeling optimistic about the next six months: those from data-driven organisations. Many of them have been encouraged by the critical advantages that data has brought their organisation during the pandemic, empowering them to come out of the crisis better than others.

Being data-driven fuels optimism for the future

More data-driven companies (63%) are optimistic about the future health of their business in the next six months than non data-driven companies (37%). This is the most notable finding we uncovered in a recent study conducted with YouGov, which surveyed more than 2,500 medium-level managers or higher and IT decision makers across four markets in Asia Pacific (Singapore, Australia, India and Japan). Business leaders across various industries were questioned about their use of data during the pandemic, lessons learnt, and confidence in the future health of their organisation.

Overwhelmingly, we found that data-driven organisations are more resilient and confident during the pandemic, and this is what fuels their optimism about the future health of their business. 82 percent of data-driven companies in APJ reported critical business advantages during the pandemic. The findings show multiple and varied benefits when organisations tap into data:

  • being able to make strategic business decisions faster (54%)
  • more effective communication with stakeholders (54%)
  • increased cross-team collaboration (51%)
  • making their business more agile (46%)

Bank Mandiri, one of the leading financial institutions in Indonesia, is a great example of such a data-driven organisation. Data enabled the bank to quickly gain visibility into the evolving situation and respond accordingly to ensure business continuity for its customers. At the height of the pandemic, when many of its customers began facing cash flow problems, the bank tapped into data sources, built data squads, and created key dashboards focused on real-time liquidity monitoring and a loan restructuring programme, all within 48 hours. The Tableau solution allowed Bank Mandiri to increase flexibility in its operations and assess customers' suitability for its new loan restructuring program. In doing so, it could ensure that customers could still carry out their financial transactions and receive support with their financial and loan repayment needs.

What is troubling is that across the region, there remains a disconnect in how businesses value and use data. In contrast to organisations like Bank Mandiri, only 39% of non data-driven companies recognise data as a critical advantage. This is in spite of how the pandemic has further asserted the role of data in society today, as we enter the era of analytics ubiquity. In the coming year, the use of data will set companies even further apart. A strong data culture is no longer a nice-to-have but a must-have for organisations. There needs to be a mindset shift in non data-driven organisations: they need to get all hands on data. Explore the full dashboard here.

Investment in data skills key to gaining competitive advantage

One of the fundamental areas of focus for organisations during the pandemic is retaining and investing in their people. On this front, data-driven companies are again leading the charge: 82 percent of them are eager to increase or continue their existing level of investment in employees' data skills over the next six months. Worryingly, 32 percent of non data-driven organisations have opted to either reduce or not invest in data skills at all. These non data-driven companies are at high risk of being left at a disadvantage. At this critical time, when it is a necessity for organisations to remain agile and adaptable, employees must have the requisite data skills to make both strategic and tactical decisions backed by insights, to future-proof their organisation for the challenges that lie ahead.

Take Zuellig Pharma, for instance. As one of the largest healthcare service providers in the region, Zuellig is deeply committed to investing in data skills training for its employees, through programmes such as Tableau and automation training as well as self-directed learning on its online Academy. These efforts have paid off well during the pandemic, exemplified by a critical mass of people within the organisation who embed data practices and assets into everyday business processes. Instead of relying on the analytics team, even ground-level staff such as warehouse operators have the competency to review and analyse data through Tableau and understand how warehouse processes map against business goals. An empowered workforce gives the organisation more confidence in planning for, preparing for, and overcoming new operational challenges brought about by the pandemic.

Aside from investing in data skills, business leaders must also look into developing a more holistic data strategy as they increasingly incorporate data into their business processes. The survey found that the other top lessons learnt from the pandemic include the need for better data quality (46%) and data transparency (43%), followed by the need for agility (41%). Organisations must take these into consideration as they plan for the year ahead.

Building business resilience with data analytics, starting now

With uneven recovery and prevailing uncertainty across the region, it is more important than ever for business leaders to build operational resilience and business agility with data insights. For leaders who worry that they have yet to establish a data-driven organisation, it is never too late to embark on the data journey, and the best time to act is now. The truth is, becoming a data-driven organisation does not require dramatic changes right off the bat. Business leaders can start by taking action with data that already sits within the organisation and empowering the workforce with the necessary data skills and tools. Over time, these steps can set off a chain reaction and culminate in communities centred on making data-first decisions, which can contribute to a larger cultural shift and better business outcomes.

Looking externally, other data-driven organisations like ZALORA can offer inspiration and lessons on how to drive organisational transformation with data. Even amidst a difficult time like a global pandemic, data has provided the means for the company to diversify its product offerings and unlock new revenue streams. Earlier this year, it introduced TRENDER, an embedded analytics solution, to provide brand partners on its platform with real-time insights and trends on sales performance. Data has helped ZALORA provide value-added solutions for its brand partners and stay relevant and competitive in the retail scene.
Find out more about our YouGov research and how to get started on your data journey here.
article-image-circleci-reports-of-a-security-breach-and-malicious-database-in-a-third-party-vendor-account
Amrata Joshi
05 Sep 2019
4 min read

CircleCI reports of a security breach and malicious database in a third-party vendor account

Last week, the team at CircleCI disclosed a security breach involving CircleCI and a third-party analytics vendor. An attacker gained access to user data in the vendor account, including usernames and email addresses associated with GitHub and Bitbucket, user IP addresses, and user-agent strings. According to the CircleCI team, information about repository URLs and names, organization names, branch names, and repository owners might also have been exposed during the incident. CircleCI user secrets, build artifacts, source code, build logs, and other production data were not accessed, nor was data such as auth tokens, password hashes, or credit card and financial information. The security and engineering teams at CircleCI revoked the access of the compromised user and launched an investigation. The official page reads, “CircleCI does not collect social security numbers or credit card information; therefore, it is highly unlikely that this incident would result in identity theft.”

How did the security breach occur?

The incident took place on 31st August at 2:32 p.m. UTC and came to light when a CircleCI team member saw an email notification about it from one of their third-party analytics vendors, suggesting unusual activity in a particular vendor account. The employee forwarded the email to the security and engineering teams, after which the investigation started and steps were taken to contain the situation. According to CircleCI’s engineering team, a database that had been added was not a CircleCI resource. The team removed the malicious database and the compromised user from the tool, and reached out to the third-party vendor to collaborate on the investigation. At 2:43 p.m. UTC, the security teams started disabling the improperly accessed account, and by 3:00 p.m. UTC this process was complete.

According to the team, customers who accessed the platform between June 30, 2019, and August 31, 2019, could possibly be affected. The page further reads, “In the interest of transparency, we are notifying affected CircleCI users of the incident via email and will provide relevant updates on the FAQ page as they become available.”

CircleCI will strengthen its platform’s security

The team will continue to collaborate with the third-party vendor to find the exact vulnerability that caused the incident. The team will review its policies for enforcing 2FA on third-party accounts and continue its transition to single sign-on (SSO) for all of its integrations. This year, the team also doubled the size of its security team. The official post reads, “Our security team is taking steps to further enhance our security practices to protect our customers, and we are looking into engaging a third-party digital forensics firm to assist us in the investigation and further remediation efforts. While the investigation is ongoing, we believe the attacker poses no further risk at this time.” The page further reads, “However, this is no excuse for failing to adequately protect user data, and we would like to apologize to the affected users. We hope that our remediations and internal audits are able to prevent incidents like this and minimize exposures in the future. We know that perfect security is an impossible goal, and while we can’t promise that, we can promise to do better.”

A few users on HackerNews discussed how CircleCI took users’ data and security for granted by handing it over to a third party. One user commented, “What's sad about this is that CircleCI actually has a great product and is one of the nicest ways to do end to end automation for mobile development/releases. Having their pipeline in place actually feels quite liberating. The sad part is that they take this for granted and liberate all your data and security weaknesses too to unknown third parties for either a weird ideological reason about interoperability or a small marginal profit.” Others appreciated the company’s handling of the issue. Another user commented, “This is how you handle a security notification. Well done CircleCI, looking forward to the full postmortem.”

What’s new in security this week?

Over 47K Supermicro servers’ BMCs are prone to USBAnywhere, a remote virtual media vulnerability Cryptographic key of Facebook’s Free Basics app has been compromised Retadup, a malicious worm infecting 850k Windows machines, self-destructs in a joint effort by Avast and the French police
Amrata Joshi
13 Feb 2019
2 min read

Android Things is now more inclined towards smart displays and speakers than general purpose IoT devices

The Android Things platform was launched in 2018 to power third-party smart displays and speakers. Last year, a number of major manufacturers, including Lenovo, JBL, and LG, released Smart Displays and speakers powered by Android Things. With the success of these devices over the past year, Google is now refocusing Android Things as a platform for OEM partners to build devices in those categories with the Assistant built in. Google announced the update in a blog post by Dave Smith, Developer Advocate for IoT, yesterday. Android Things uses the Android Things SDK on top of hardware like the NXP i.MX7D and Raspberry Pi 3B. According to Google’s blog post, Android Things remains a platform for experimenting with and building smart connected devices. System images will be available through the Android Things console, where developers can create new builds and push app updates for up to 100 devices for non-commercial use. However, support for production System on Modules (SoMs) based on NXP, Qualcomm, and MediaTek hardware won’t currently be available through the public developer platform. This refocus doesn’t seem to be in line with Google’s original Internet-of-Things vision for Android Things; even Google’s Internet-of-Things OS, Brillo, was rebranded to Android Things in late 2016. The focus now seems to be on smart displays and smart speakers. https://twitter.com/stshank/status/1095434162977165312 Google’s official post states that the team will continue to provide the platform for IoT devices, including turnkey hardware solutions. It is also pointing developers interested in turnkey hardware solutions to Cloud IoT Core for secure device connectivity at scale and the upcoming Cloud IoT Edge runtime for managed edge computing services.

Google hasn’t stated any reasons for the shift of Android Things from general-purpose IoT devices to smart displays and speakers, but rising competition could be one of them. According to a few users, this is bad news, as Google keeps killing its good projects. https://twitter.com/aliumujib/status/1095416461345124352 https://twitter.com/gregoriopalama/status/1095591673445433344 Google announces the general availability of a new API for Google Docs Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women YouTube promises to reduce recommendations of ‘conspiracy theory’ videos. Ex-googler explains why this is a ‘historic victory’
Savia Lobo
12 Aug 2019
5 min read

Microsoft contractors also listen to Skype and Cortana audio recordings, joining Amazon, Google and Apple in privacy violation scandals

In a recent report, Motherboard reveals, “Contractors working for Microsoft are listening to personal conversations of Skype users conducted through the app's translation service.” The allegation is based on a cache of internal documents, screenshots, and audio recordings obtained by Motherboard. These files also reveal that contractors were listening to voice commands given to Cortana, Microsoft's voice assistant. Skype's FAQ does mention that the company collects and uses conversations to improve products and services, and that it may analyze the audio of phone calls a user wants translated in order to improve the chat platform's services; however, it nowhere informs users that some of the voice analysis may be done manually. Earlier this year, Apple, Amazon, and Google faced scrutiny over how they handle users' voice data obtained from their respective voice assistants. After the Guardian’s investigation into Apple employees listening in on Siri conversations was published, Apple announced it had temporarily suspended human transcribers from listening to conversations users had with Siri. Google agreed to stop listening in on and transcribing Google Assistant recordings for three months in Europe. Google’s decision to halt its review process was disclosed after a German privacy regulator started investigating the program, after “a contractor working as a Dutch language reviewer handed more than 1,000 recordings to the Belgian news site VRT which was then able to identify some of the people in the clips,” TechCrunch reports. On the other hand, Amazon now allows users to opt out of the program that allows contractors to manually review voice data. Bloomberg was the first to report, in April, that “Amazon had a team of thousands of workers around the world listening to Alexa audio requests with the goal of improving the software.”
The anonymous Microsoft contractor who shared the cache of files with Motherboard said, “The fact that I can even share some of this with you shows how lax things are in terms of protecting user data.” In an online chat, Frederike Kaltheuner, data exploitation program lead at activist group Privacy International, told Motherboard, “People use Skype to call their lovers, interview for jobs, or connect with their families abroad. Companies should be 100% transparent about the ways people's conversations are recorded and how these recordings are being used." She further added, “If a sample of your voice is going to human review (for whatever reason) the system should ask them whether you are ok with that, or at least give you the option to opt-out." Pat Walshe, an activist from Privacy Matters, in an online chat with Motherboard said, "The marketing blurb for [Skype Translator] refers to the use of AI not humans listening in. This whole area needs a regulatory review." "I’ve looked at it (Skype Translator FAQ) and don’t believe it amounts to transparent and fair processing," he added. A Microsoft spokesperson told Motherboard in an emailed statement, "Microsoft collects voice data to provide and improve voice-enabled services like search, voice commands, dictation or translation services. We strive to be transparent about our collection and use of voice data to ensure customers can make informed choices about when and how their voice data is used. Microsoft gets customers’ permission before collecting and using their voice data." The statement continues, "We also put in place several procedures designed to prioritize users’ privacy before sharing this data with our vendors, including de-identifying data, requiring non-disclosure agreements with vendors and their employees, and requiring that vendors meet the high privacy standards set out in European law. 
We continue to review the way we handle voice data to ensure we make options as clear as possible to customers and provide strong privacy protections."

How safe is user data with these smart assistants looped with manual assistance?

According to the documents and screenshots, when a contractor is given a piece of audio to transcribe, they are also given a set of approximate translations generated by Skype's translation system. “The contractor then needs to select the most accurate translation or provide their own, and the audio is treated as confidential Microsoft information, the screenshots show,” Motherboard reports. Microsoft said this data is only available to the transcribers “through a secure online portal,” and that the company takes steps to remove identifying information such as user or device identification numbers. The contractor told Motherboard, "Some stuff I've heard could clearly be described as phone sex. I've heard people entering full addresses in Cortana commands or asking Cortana to provide search returns on pornography queries. While I don't know exactly what one could do with this information, it seems odd to me that it isn't being handled in a more controlled environment." In such an environment, users no longer feel safe, even when a company's FAQ assures them their data is protected while it is in fact being listened to. A user on Reddit commented, “Pretty sad that we can not have a secure, private conversation from one place to another, anymore, without taking extraordinary measures, which congress also soon wants to poke holes in, by mandating back doors in these systems.” https://twitter.com/masonremaley/status/1159140919247036416 After this revelation, people may take hasty steps such as uninstalling Skype or not sharing personal details within earshot of their smart home devices. However, such steps won’t erase anything the transcribers might have heard in the past.

Will this also result in a reduction in sales of smart home devices, directly affecting the IoT market for each company that offers them? https://twitter.com/RidT/status/1159101690861301760 To know more about this news in detail, read Motherboard’s report. Microsoft reveals Russian hackers “Fancy Bear” are the culprit for IoT network breach in the U.S. Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless
Amrata Joshi
17 May 2019
3 min read

Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model

Just two days ago, the research team at Google AI introduced Translatotron, an end-to-end, speech-to-speech translation model. In their research paper, “Direct speech-to-speech translation with a sequence-to-sequence model,” they demonstrated that Translatotron achieves high translation quality on two Spanish-to-English datasets. Speech-to-speech translation systems have usually been broken into three separate components: automatic speech recognition, which transcribes the source speech as text; machine translation, which translates the transcribed text into the target language; and text-to-speech synthesis (TTS), which generates speech in the target language from the translated text. Dividing the task among such systems has worked successfully and has powered many commercial speech-to-speech translation products, including Google Translate. In 2016, engineers and researchers recognized the potential of end-to-end models for speech translation when researchers demonstrated the feasibility of using a single sequence-to-sequence model for speech-to-text translation. In 2017, the Google AI team demonstrated that such end-to-end models can outperform cascade models, and recently many approaches for improving end-to-end speech-to-text translation models have been proposed. Translatotron demonstrates that a single sequence-to-sequence model can directly translate speech from one language into another, without relying on an intermediate text representation in either language, as cascaded systems require. It is based on a sequence-to-sequence network that takes source spectrograms as input and generates spectrograms of the translated content in the target language.

Translatotron also makes use of two separately trained components: a neural vocoder that converts output spectrograms to time-domain waveforms, and a speaker encoder that is used to maintain the source speaker’s voice in the synthesized translated speech. The sequence-to-sequence model uses a multitask objective to predict source and target transcripts and generate target spectrograms during training; during inference, no transcripts or other intermediate text representations are used. The engineers at Google AI validated Translatotron’s translation quality by measuring the BLEU (bilingual evaluation understudy) score, computed with text transcribed by a speech recognition system. The results do lag behind a conventional cascade system, but the engineers have demonstrated the feasibility of end-to-end direct speech-to-speech translation. Translatotron can retain the original speaker’s vocal characteristics in the translated speech by incorporating a speaker encoder network, which makes the translated speech sound more natural and less jarring. According to the Google AI team, Translatotron gives a more accurate translation than the baseline cascade model while retaining the original speaker’s vocal characteristics. The engineers concluded that Translatotron is the first end-to-end model that can directly translate speech from one language into speech in another language and retain the source speaker’s voice in the translated speech. To know more about this news, check out the blog post by Google AI. Google News Initiative partners with Google AI to help ‘deep fake’ audio detection research Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation Google’s Cloud Healthcare API is now available in beta
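The BLEU metric mentioned above scores a hypothesis translation by its modified n-gram precision against a reference, damped by a brevity penalty so that very short outputs can't cheat. A simplified single-reference sketch of the idea follows; real evaluations such as Google's use corpus-level BLEU with proper smoothing, and the tiny floor used here is only a stand-in:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=4):
    """Single-reference sentence BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    log_precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        hyp_counts = Counter(ngrams(hyp, n))
        # Clip each hypothesis n-gram count by its count in the reference
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(len(hyp) - n + 1, 1)
        # Tiny floor stands in for proper smoothing when an order has no matches
        log_precisions.append(math.log(max(clipped, 1e-9) / total))
    brevity = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return brevity * math.exp(sum(log_precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # → 1.0
```

In a speech-to-speech setting like Translatotron's, the hypothesis string itself comes from running a speech recognizer over the synthesized audio, which is why the paper reports BLEU "computed with text transcribed by a speech recognition system."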
Savia Lobo
19 Jul 2019
4 min read

Ex-Microsoft employee arrested for stealing over $10M from store credits using a test account

On Tuesday, one of Microsoft’s former employees, Volodymyr Kvashuk, 25, was arrested for attempting to steal $10 million worth of digital currency from Microsoft. “If convicted of mail fraud, the former Microsoft software engineer could face as much as 20 years in prison and a $250,000 fine,” The Register reports. Kvashuk, a Ukrainian citizen residing in Renton, Washington, was hired by Microsoft in August 2016 and worked as a contractor until June 2018. He was part of Microsoft’s Universal Store Team (UST), tasked with handling the company's e-commerce operations. Sam Guckenheimer, product owner for Azure DevOps at Microsoft, said back in 2017 that the UST "is the main commercial engine of Microsoft with the mission to bring One Universal Store for all commerce at Microsoft.” He further explained, "The UST encompasses everything Microsoft sells and everything others sell through the company, consumer and commercial, digital and physical, subscription and transaction, via all channels and storefronts." According to the prosecution’s complaint, filed in a US federal district court in Seattle, the UST team was assigned to make simulated purchases of products from the online store to ensure customers could make purchases without any glitches. The test accounts used to make these purchases were linked to artificial payment devices (“Test In Production” or “TIP” cards) that allowed a tester to simulate a purchase without generating an actual charge. The program was designed to block the delivery of physical goods; however, no restrictions or safeguards were placed on test purchases of digital currency, i.e. “Currency Stored Value” or “CSV,” which could also be used to buy Microsoft products or services. Kvashuk fraudulently obtained these CSVs and resold them to third parties, which netted him over $10,000,000 in CSV as well as some property from Microsoft. Kvashuk bought these CSVs by disguising his identity with different false names and statements.
According to The Register, “The scheme supposedly began in 2017 and escalated to the point that Kvashuk, on a base salary of $116,000 per year, bought himself a $162,000 Tesla and $1.6m home in Renton, Washington.” Microsoft's UST Fraud Investigation Strike Team (FIST) noticed an unexpected rise in the use of CSV to buy subscriptions to Microsoft's Xbox gaming system in February 2018. By tracing the digital funds, the investigators found that these were resold on two different websites to two whitelisted test accounts. FIST then traced the accounts and transactions involved. With the assistance of the US Secret Service and the Internal Revenue Service, investigators concluded that Kvashuk had defrauded Microsoft. Kvashuk had also used a Bitcoin mixing service to hide his public blockchain transactions. “In addition to service provider records that point to Kvashuk, the complaint notes that Microsoft's online store uses a form of device fingerprinting called a Fuzzy Device ID. Investigators, it's claimed, linked a specific device identifier to accounts associated with Kvashuk,” according to The Register. One of the users on HackerNews mentions, “There are two technical interesting takeaways in this: 1 - Microsoft, and probably most big companies, have persistent tracking ID on most stuff that is hard to get rid of and can be used to identify you and devices linked to you in a fuzzy way. I mean, we know about super cookies, fingerprinting and such, but it's another to hear it being used to track somebody that was careful and using multiple anonymous accounts. 2 - BTC mixers will not protect you. Correlating one single wallet with you will make it possible to them retrace the entire history.” To know about this news in detail, head over to the prosecution’s complaint.
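The "Fuzzy Device ID" named in the complaint suggests matching devices on overlapping attributes rather than on a single exact identifier, so a match survives a changed IP address or user agent. Microsoft's actual scheme is not public; the toy similarity measure below is purely illustrative of the general idea:

```python
def device_similarity(fp_a, fp_b):
    """Fraction of attributes two device fingerprints agree on.

    A 'fuzzy' match tolerates a few changed attributes (new network,
    new browser version) instead of requiring an exact identifier.
    Illustrative only; not Microsoft's actual algorithm.
    """
    keys = set(fp_a) | set(fp_b)
    if not keys:
        return 0.0
    agreed = sum(1 for k in keys if k in fp_a and k in fp_b and fp_a[k] == fp_b[k])
    return agreed / len(keys)

laptop = {"os": "Windows 10", "tz": "PST", "gpu": "GTX 1070", "lang": "en-US"}
same_laptop_new_tz = {"os": "Windows 10", "tz": "EST", "gpu": "GTX 1070", "lang": "en-US"}
print(device_similarity(laptop, same_laptop_new_tz))  # → 0.75
```

A threshold on such a score is what lets investigators link "anonymous" accounts back to one physical device, which is the first HackerNews takeaway quoted above.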
Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology Microsoft mulls replacing C and C++ code with Rust calling it a “modern safer system programming language” with great memory safety features Microsoft adds Telemetry files in a “security-only update” without prior notice to users
Savia Lobo
22 Apr 2019
4 min read

Liz Fong-Jones on how to secure SSH with Two Factor Authentication (2FA)

Over the weekend, Liz Fong-Jones, a Developer Advocate at honeycomb.io, posted about her experience hardening the security of honeycomb.io’s infrastructure. In her post on GitHub, Liz explains how SSH keys, which provide authentication between hosts, can be vulnerable to threats that are easily overlooked. Liz notes that with passphrase encryption, private keys become resistant to theft at rest. When they are in use, however, the usability cost of re-entering the passphrase on every connection means that “engineers began caching keys unencrypted in memory of their workstations, and worse yet, forwarding the agent to allow remote hosts to use the cached keys without further confirmation.” The Matrix breach, which took place on April 11, showcases what happens when authenticated sessions are allowed to propagate unchecked: the intruder had access to the production databases, potentially giving them access to unencrypted message data, password hashes, and access tokens. Liz mentions two primary ways of preventing an attacker from misusing credentials: using a separate device that, from a shared secret, generates numerical codes that can be transferred out of band and entered alongside the key; or having a separate device perform all the cryptography, only when physically authorized by the user. In her post, Liz asks, “What will work for a majority of developers who are used to simply loading their SSH key into the agent at the start of their login session and SSHing everywhere?” and shares her approach to avoiding such threats. As a prerequisite, Liz notes, “I'm assuming that you have a publicly exposed bastion host for each environment that intermediates accesses to the rest of each environment's VPC, and use SSH keys to authenticate from laptops to the bastion and from the bastion to each VM/container in the VPC.”
As a preliminary step, the user should enable numerical time-based one-time passwords (TOTP) for SSH authentication. A malicious host could still impersonate the real bastion (if strict host checking isn't on), intercept the OTP, and then use it to authenticate to the real bastion; even so, as Liz states, “it's better than being wormed or compromised because you forgot to take basic measures against even a passive adversary.” After the server and client setup, the user needs to use Chef to populate /etc/2fa_token_keys with keys that are generated and stored securely. There are several client setup options:

Mac client setup

Users with Touchbar Macs should use TouchID to authenticate logins, as they'll have their laptop and their fingers with them anyway. For instance, SeKey is an SSH agent that allows users to authenticate to UNIX/Linux SSH servers using the Secure Enclave.

Krypt.co setup for iOS and Android

With the help of krypt.co, instead of generating OTPs and sending them over manually, mobile devices can securely store SSH keys and remotely authorize their usage (sending the signed challenge to the remote server) with a single click. This process is even more secure than a TOTP app, so long as the user supplies appropriate parameters to force hardware coprocessor storage (NIST P-256 for iOS, and 3072-bit RSA for Android, on new enough devices). Make sure people use screen locks! Liz's post also explores YubiKey hardware tokens and Linux/ChromeOS client setup. To know more about the setup in detail, read Liz’s GitHub post. How to remotely monitor hosts over Telnet and SSH [Tutorial] OpenSSH, now a part of the Windows Server 2019 OpenSSH 7.9 released
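The TOTP codes described above are defined by RFC 6238: an HMAC-SHA1 over a time-step counter derived from the Unix time, dynamically truncated to a short numeric code. A minimal standard-library sketch of the algorithm (illustrative only, not the PAM module a real SSH setup would use):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, period=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated (RFC 4226) to a fixed-length numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", t = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # → 94287082
```

Because both sides derive the code from a shared secret plus the current 30-second window, the code is useless to an eavesdropper moments later, which is what makes it a reasonable second factor despite the bastion-impersonation caveat above.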
Amrata Joshi
03 Jul 2019
3 min read

Google suffers another Outage as Google Cloud servers in the us-east1 region are cut off

Yesterday, Google Cloud servers in the us-east1 region were cut off from the rest of the world as an issue was reported with Cloud Networking and Load Balancing within us-east1. The issues were later root-caused to physical damage to multiple concurrent fiber bundles that serve network paths in us-east1. At 10:25 am PT yesterday, the status was updated: “Customers may still observe traffic through Global Load-balancers being directed away from back-ends in us-east1 at this time.” It was later posted on the status dashboard that mitigation work was underway to address the issue. The rate of errors was decreasing by then, but a few users still faced elevated latency. Around 4:05 pm PT, the status was updated: “The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours. In the meantime, we are electively rerouting traffic to ensure that customers' services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period. We will provide another status update either as the situation warrants or by Wednesday, 2019-07-03 12:00 US/Pacific tomorrow.” This outage seems to be the second major one to hit Google's services in recent times. Last month, Google Calendar was down for nearly three hours around the world, and Google Cloud suffered a major outage that took down a number of Google services including YouTube, GSuite, and Gmail. According to a person who works on Google Cloud, the team is experiencing an issue with a subset of the fiber paths that supply the region and is working to resolve it.

They have removed almost all Google.com traffic from the region to prefer GCP customers. A Google employee commented on the HackerNews thread, “I work on Google Cloud (but I'm not in SRE, oncall, etc.). As the updates to [1] say, we're working to resolve a networking issue. The Region isn't (and wasn't) "down", but obviously network latency spiking up for external connectivity is bad. We are currently experiencing an issue with a subset of the fiber paths that supply the region. We're working on getting that restored. In the meantime, we've removed almost all Google.com traffic out of the Region to prefer GCP customers. That's why the latency increase is subsiding, as we're freeing up the fiber paths by shedding our traffic.” Google Cloud users are concerned about the outage and are waiting for services to be restored. https://twitter.com/IanFortier/status/1146079092229529600 https://twitter.com/beckynagel/status/1146133614100221952 https://twitter.com/SeaWolff/status/1146116320926359552 Ritiko, a cloud-based EHR company, is also experiencing issues because of the Google Cloud outage, as they host their services there. https://twitter.com/ritikoL/status/1146121314387857408 As of now there is no further update from Google on whether the outage is resolved, but they expect a full resolution within the next 24 hours. Check this space for new updates and information. Google Calendar was down for nearly three hours after a major outage Do Google Ads secretly track Stack Overflow users? Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard
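"Electively rerouting traffic" in Google's status update describes a global load balancer draining a region gradually, reducing its share of requests rather than dropping it outright, so remaining capacity still serves a trickle while repairs proceed. A toy sketch of that weighting logic (illustrative only, not GCP's implementation; the function and values are invented):

```python
def shed_traffic(weights, degraded, keep_fraction=0.1):
    """Scale the degraded region's traffic share down and renormalize.

    The other regions absorb the shed load proportionally, which mirrors
    how a global load balancer drains a region without a hard cutoff.
    """
    adjusted = dict(weights)
    adjusted[degraded] *= keep_fraction
    total = sum(adjusted.values())
    return {region: w / total for region, w in adjusted.items()}

before = {"us-east1": 0.5, "us-central1": 0.3, "europe-west1": 0.2}
after = shed_traffic(before, "us-east1")
# us-east1's share drops from 50% to under 10%; the other regions absorb the load.
```

Shifting Google.com traffic out of us-east1 first, as the employee describes, is the same idea applied to internal traffic: it frees fiber capacity so that paying GCP customers see less of the latency spike.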
Melisha Dsouza
16 Oct 2018
4 min read

IBM launches Industry's first ‘Cybersecurity Operations Center on Wheels’ for on-demand cybersecurity support

"Having a mobile facility that allows us to bring realistic cyberattack preparation and rehearsal to a larger, global audience will be a game changer in our mission to improve incident response efforts for organizations around the world." -Caleb Barlow, vice president of Threat Intelligence at IBM Security

Yesterday (15th October), IBM Security announced the industry's first mobile Security Operations Center: the IBM X-Force Command Cyber Tactical Operations Center (C-TOC). This mobile command center, housed in the back of a semi-trailer truck, will travel around the U.S. and Europe for cybersecurity training, preparedness, and response operations. The aim of the project is to provide on-demand cybersecurity support while building cybersecurity awareness and skills among professionals, students, and consumers. Cybercriminals are getting smarter by the day, and cybercrime is becoming more sophisticated by the hour, so it is necessary for organizations to plan and rehearse their response to potential security breaches in advance. According to the 2018 Cost of a Data Breach Study, companies that respond to incidents effectively and remediate the event within 30 days can save over $1 million on the total cost of a data breach. Taking this into consideration, the C-TOC has the potential to provide immediate onsite support when clients' cybersecurity needs arise. The mobile vehicle is modeled after Tactical Operations Centers used by the military and incident command posts used by first responders. It comes with a gesture-controlled cybersecurity "watch floor," a data center, and conference facilities, and it has self-sustaining power and satellite and cellular communications, which provide a sterile and resilient network for investigation and response and serve as a platform for cybersecurity training.
Source: IBM

Here are some of the key takeaways individuals can benefit from with this mobile Security Operations Center:

#1 Focus on Response Training and Preparedness

The C-TOC will simulate real-world scenarios that depict how hackers operate, helping companies train their teams to respond to attacks. The training covers key strategies to protect a business and its resources from cyberattacks.

#2 Onsite Cybersecurity Support

The C-TOC is mobile and can be deployed as an on-demand Security Operations Center. It aims to provide a realistic cybersecurity experience while visiting local universities and industries to build interest in cybersecurity careers and to address other cybersecurity concerns.

#3 Cyber Best Practices Laboratory

The C-TOC training includes real-world examples based on experiences with customers in the Cambridge Cyber Range. Attack scenarios will be designed for teams to participate in, built around pointers such as working as a team to mitigate attacks, thinking like a hacker, and getting hands-on experience with a malicious toolset.

#4 Supplementary Cybersecurity Operations

The IBM team also aims to spread awareness of the anticipated cybersecurity workforce shortage. With an expected shortfall of nearly 2 million cybersecurity professionals by 2022, it is necessary to educate the masses about careers in security as well as help current professionals upskill in cybersecurity.

This is one of many initiatives IBM has taken to raise awareness of the importance of mitigating cyber attacks in time. Back in 2016, IBM invested $200 million in new incident response facilities, services, and software, which included the industry's first Cyber Range for the commercial sector.
By simulating real-world cyber attacks and training individuals to come up with advanced defense strategies, the mobile SOC aims to bring realistic cyberattack preparation and rehearsal to a larger, global audience. To know more about this news, as well as the dates the C-TOC will tour the U.S. and Europe, head over to IBM's official blog.

Mozilla announces $3.5 million award for 'Responsible Computer Science Challenge' to encourage teaching ethical coding to CS graduates

The Intercept says Google's Dragonfly is closer to launch than Google would like us to believe

U.S. Government Accountability Office (GAO) reports U.S. weapons can be easily hacked
Rigetti Computing launches the first Quantum Cloud Services to bring quantum computing to businesses

Sugandha Lahoti
11 Sep 2018
3 min read
Rigetti Computing has launched Quantum Cloud Services (QCS), bringing together the best of classical and quantum computing on a single cloud platform. "What this platform achieves for the very first time is an integrated computing system that is the first quantum cloud services architecture," says Chad Rigetti, founder and CEO.

Rigetti Computing has been competing head to head with behemoths like Google and IBM to grab the quantum computing market. Last month, Rigetti unveiled plans to deploy a 128-qubit quantum computing system, challenging Google, IBM, and Intel for leadership in this emerging technology. Before that, in December last year, Rigetti developed a new quantum algorithm to supercharge unsupervised machine learning. Now the startup says quantum computing is almost ready for business.

With QCS you can build and run programs combining real quantum hardware with a virtual development environment. QCS will offer a combination of a cloud-based classical computer, Rigetti's Forest development platform, and access to Rigetti's quantum backends, and will be used to address fundamental challenges in medicine, energy, business, and science:

Chemistry: predicting the properties of complex molecules and materials to design more effective medicines, energy technologies, and resilient crops.

Machine learning: training advanced AI on quantum computers, helping with computer vision, pattern recognition, voice recognition, and machine translation.

Optimization: solving complex optimization problems such as 'job shop' scheduling and the traveling salesperson problem, driving critical efficiencies in business, military, and public sector logistics, scheduling, shipping, and resource allocation.

Rigetti is now inviting customers to apply for free access to these systems.
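Programs for a platform like QCS are expressed as quantum circuits. As a rough, stdlib-only Python sketch (this is an illustration, not Rigetti's actual Forest/pyQuil API), the following simulates the canonical two-qubit Bell-state circuit that such programs often begin with:

```python
# Hypothetical, stdlib-only sketch -- NOT Rigetti's Forest/pyQuil API.
# It simulates the two-qubit Bell-state circuit (H on qubit 0, then CNOT)
# by tracking the 4-amplitude state vector directly.

def apply_h(state, qubit):
    """Apply a Hadamard gate to `qubit` of a 2-qubit state vector."""
    s = 2 ** -0.5
    new = [0j] * 4
    for i, amp in enumerate(state):
        bit = (i >> qubit) & 1
        # H: |0> -> (|0> + |1>)/sqrt(2),  |1> -> (|0> - |1>)/sqrt(2)
        new[i] += s * amp if bit == 0 else -s * amp
        new[i ^ (1 << qubit)] += s * amp
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (CNOT is its own inverse)."""
    return [state[i ^ (1 << target)] if (i >> control) & 1 else state[i]
            for i in range(4)]

state = [1 + 0j, 0j, 0j, 0j]            # start in |00>
state = apply_cnot(apply_h(state, 0), 0, 1)
probs = [abs(a) ** 2 for a in state]    # measurement probabilities
```

The resulting state assigns roughly 50% probability each to measuring 00 and 11, and none to 01 or 10, which is the entanglement that quantum hardware exploits. On a real backend the same circuit would be written in a quantum instruction language and dispatched to a QPU or simulator.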
Rigetti has also challenged developers to build a real-world application that achieves quantum advantage; the first to do so wins a $1 million prize. "What we want to do is focus on the commercial utility and applicability of these machines, because ultimately that's why this company exists," says Rigetti. Rigetti is also partnering with a number of leading quantum computing startups, including Entropica Labs, Horizon Quantum Computing, OTI Lumionics, ProteinQure, QC Ware, and Riverlane Research, which will build and distribute applications through the Rigetti QCS platform. You can read more details on the Rigetti Computing official website.

Quantum computing is poised to take a quantum leap, with industries and governments on its side.

Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes controlled quantum entanglements possible.

Rigetti plans to deploy 128 qubit chip Quantum computer.