
Tech News - IoT and Hardware

119 Articles

These robot jellyfish are on a mission to explore and guard the oceans

Bhagyashree R
24 Sep 2018
3 min read
Earlier last week, a team of US scientists from Florida Atlantic University (FAU) and the US Office of Naval Research published a paper on five jellyfish robots they have built, titled "Thrust force characterization of free-swimming soft robotic jellyfish." The scientists' main motivation for building robotic jellyfish is to track and monitor fragile marine ecosystems without causing unintentional damage to them. These soft robots are powered by hydraulic silicone tentacles and can swim through openings narrower than their bodies; the so-called 'jelly-bots' demonstrated this by squeezing through circular holes cut in a plexiglass plate.

The design structure of the 'jelly-bots'

The jelly-bots resemble a moon jellyfish (Aurelia aurita) in the ephyra stage of its life cycle, before it becomes a fully grown medusa. Soft hydraulic network actuators were chosen to avoid damaging fragile biological systems. To let the jellyfish steer, the team uses two impeller pumps to inflate the eight tentacles. The mold models for the robot were designed in SolidWorks and then 3D printed out of PLA (polylactic acid) on an Ultimaker 2. Each jellyfish uses rubber of a different hardness so the team could test its effect on propulsion efficiency.

Source: IOPScience

What the study was about

The jelly-bots helped the scientists determine the impact of the following factors on the measured thrust force:

- Actuator material Shore hardness
- Actuation frequency
- Tentacle stroke actuation amplitude

The scientists found that all three factors significantly impact mean thrust force generation, which peaks with a half-stroke actuation amplitude at a frequency of 0.8 Hz.

Results

The material composition of the actuators significantly impacted the measured force produced by the jellyfish, as did the actuation frequency and stroke amplitude. The greatest forces were measured with a half-stroke amplitude at 0.8 Hz and a tentacle actuator-flap material Shore hardness composition of 30–30. In the tests, the jellyfish were able to swim through openings narrower than the robot's nominal diameter and demonstrated the ability to swim directionally. The robots were tested in the ocean and have the potential to monitor and explore delicate ecosystems without inadvertently damaging them.

One of the scientists, Dr. Engeberg, told Tech Xplore: "In the future, we plan to incorporate environmental sensors like sonar into the robot's control algorithm, along with a navigational algorithm. This will enable it to find gaps and determine if it can swim through them."

To know more about the jelly-bots, read the research paper published by the scientists. You may also watch a video showing the jelly-bots functioning in deep waters.
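The study's analysis amounts to sweeping the actuation parameters and picking the combination that maximizes mean thrust. The sketch below mimics that procedure with a toy, made-up thrust model: the function, its coefficients, and the candidate grids are our illustrative assumptions, not the paper's model; the toy function is simply shaped to peak at 0.8 Hz and a half-stroke amplitude, matching the reported result.

```python
# Toy parameter sweep, mirroring the kind of analysis described above.
# The thrust model is invented for illustration only.

def mean_thrust(freq_hz: float, stroke_amplitude: float) -> float:
    """Hypothetical mean thrust (arbitrary units), peaking at 0.8 Hz, half-stroke."""
    return 10.0 - 8.0 * (freq_hz - 0.8) ** 2 - 6.0 * (stroke_amplitude - 0.5) ** 2

frequencies = [0.4, 0.6, 0.8, 1.0, 1.2]   # candidate actuation frequencies, Hz
amplitudes = [0.25, 0.5, 0.75, 1.0]       # stroke amplitude, fraction of full stroke

# Exhaustively evaluate the grid and keep the best combination.
best = max(
    ((f, a) for f in frequencies for a in amplitudes),
    key=lambda fa: mean_thrust(*fa),
)
print(best)  # (0.8, 0.5): thrust peaks at 0.8 Hz with a half-stroke amplitude
```

In the real experiments the thrust values came from load-cell measurements rather than a closed-form model, but the argmax-over-conditions logic is the same.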


Google’s Pixel camera app introduces Night Sight to help click clear pictures with HDR+

Amrata Joshi
15 Nov 2018
3 min read
Yesterday, the Pixel camera app launched a new feature, Night Sight, to help users take sharp, clean photographs in very low light. It works on both the main and selfie cameras of all three generations of Pixel phones, and it does not require a tripod or flash.

How HDR+ helps Night Sight

Image source: Google AI Blog

Night Sight works because of HDR+, which uses computational photography to produce clearer photographs. When the shutter button is pressed, HDR+ captures a rapid burst of pictures and quickly combines them into one. This improves results in both high-dynamic-range and low-light situations, reducing the impact of read noise and shot noise and thereby improving SNR (signal-to-noise ratio) in dim lighting. To keep photographs sharp even if the hand shakes or the subject moves, the Pixel camera app uses short exposures; pieces of frames that aren't well aligned get rejected, which lets HDR+ produce sharp images even when there is motion. The app works well in both dim light and excessive light.

The default picture-taking mode on Pixel phones uses a zero-shutter-lag (ZSL) protocol, which limits exposure time. As soon as the camera app is opened, it starts capturing image frames and stores them in a circular buffer, which constantly erases old frames to make room for new ones. When the shutter button is pressed, the camera sends the most recent 9 or 15 frames to the HDR+ or Super Res Zoom software. The image is captured at exactly the right moment, which is why it is called zero-shutter-lag. No matter how dim the scene is, HDR+ limits exposures to at most 66 ms, allowing a display rate of at least 15 frames per second. For dimmer scenes where longer exposures are needed, Night Sight uses positive-shutter-lag (PSL) instead. The app uses motion metering to measure recent scene motion and chooses an exposure time that minimizes blur.

How to use Night Sight?

The Night Sight feature can't operate in complete darkness; there should be at least some light. It works better in uniform lighting than in harsh lighting. Users can tap on various objects and then move the exposure slider to increase exposure. If it's very dark and the camera can't focus, tap on the edge of a light source or on a high-contrast edge. Keep very bright light sources out of the field of view to avoid lens-flare artifacts.

The Night Sight feature has already created some buzz. Its major drawback is that it can't work in complete darkness. Also, since the learning-based white balancer is trained for the Pixel 3, it will be less accurate on older phones; the app works better on newer phones than older ones. Read more about this news on the Google AI Blog.
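The zero-shutter-lag scheme described above can be sketched with a bounded deque: frames are pushed continuously, the oldest fall off automatically, and a shutter press simply grabs the newest burst. The buffer and burst sizes follow the figures in the post (a rolling buffer, with 9 or 15 frames sent to HDR+); the callback names and integer stand-ins for frames are illustrative assumptions, not Google's implementation.

```python
# Minimal sketch of a zero-shutter-lag (ZSL) circular frame buffer.
from collections import deque

BUFFER_SIZE = 15   # circular buffer: old frames are erased automatically
BURST_SIZE = 9     # frames handed to the HDR+ merge on shutter press

frame_buffer = deque(maxlen=BUFFER_SIZE)

def on_new_frame(frame):
    """Called for every preview frame; the deque drops the oldest when full."""
    frame_buffer.append(frame)

def on_shutter_press():
    """Return the most recent BURST_SIZE frames: zero shutter lag, since
    the moment of the press is already in the buffer."""
    return list(frame_buffer)[-BURST_SIZE:]

# Simulate 100 preview frames (here just integers), then a shutter press.
for i in range(100):
    on_new_frame(i)

burst = on_shutter_press()
print(burst)  # the nine most recent frames: 91 through 99
```

A real pipeline would then align and merge the burst; the point here is only that pressing the shutter reads backward in time instead of triggering a new capture.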


XHCI (USB 3.0+) issues have finally been resolved!

Amrata Joshi
11 Mar 2019
2 min read
Users have been facing issues with the XHCI (USB 3 host controller) bus driver for quite some time now. Last month, Waddlesplash, a team member at Haiku, worked on fixing the XHCI bus driver. A few users had contributed small fixes that helped the driver boot Haiku within QEMU, but there were still issues causing device lockups, such as USB mouse/keyboard stalls.

The kernel-related issues have now been resolved. Devices no longer lock up, and performance has greatly improved, reaching 120 MB/s on some USB 3 flash drives and XHCI chipsets. Users can now try the improved, more efficient driver. The only remaining issue is a hard stall on boot with certain USB 3 flash drives on NEC/Renesas controllers. The work on USB 2 flash drives in USB 3 ports and on mounting flash drives has finished. Most of the issues related to controller initialization were fixed in hrev52772. The issues related to broken transfer-finalization logic and random device stalls have been fixed, as has a race condition in request submission. Dead code has been removed, the style has been cleaned up, and the device structure has been improved. This driver will be more useful as a reference for other OS developers than FreeBSD's, OpenBSD's, or Linux's.

To know more about this news, check out Haiku's official blog post.


Amazon Alexa and AWS helping NASA improve their efficiency

Gebin George
22 Jun 2018
2 min read
While everyone else is busy playing songs and giving voice commands to Amazon Alexa, the US space agency NASA is using Amazon's voice assistant to organize its data-centric tasks efficiently. Tom Soderstrom, Chief Technology and Innovation Officer at NASA, said: "If you have an Alexa-controlled Amazon Echo smart speaker at home, tell her to enable the 'NASA Mars' app. Once done, ask Alexa anything about the Red Planet and she will come back with all the right answers. This enables serverless computing where we don't need to build for scale but for real-life work cases and get the desired results in a much cheaper way. Remember that voice as a platform is poised to give 10 times faster results. It is kind of a virtual helpdesk. Alexa doesn't need to know where the data is stored or what the passwords are to access that data. She scans and quickly provides us what we need. The only challenge now is to figure out how to communicate better with digital assistants and chatbots to make voice a more powerful medium," Soderstrom emphasized.

Serverless computing gives developers the flexibility to deploy and run applications and services without thinking about scale or server management. AWS is the market leader in fully managed infrastructure services, helping organizations focus more on product development. Alexa, for example, can help employees of JPL (a federally funded research and development center managed for NASA) scan through 400,000 sub-contracts and get the requested copy of a contract from the dataset right on the desktop in a jiffy. JPL has also integrated its conference rooms with Alexa and IoT sensors, which helps staff resolve queries quickly.

One JPL executive also stressed that AI is not going to take away human jobs: "AI will transform industries ranging from healthcare to retail and e-commerce and auto and transportation. Sectors that won't embrace AI will be left behind. Humans are 80 percent effective and machines are also 80 percent effective. When you bring them together, they're nearly 95 percent effective."

Voice-controlled, AI-powered digital assistants, it seems, are here to stay, empowering digital transformation.


Boston Dynamics’ latest version of Handle, robot designed for logistics

Amrata Joshi
01 Apr 2019
2 min read
Boston Dynamics, an American engineering and robotics design company, has come up with the latest version of its Handle robot, which will be useful in factories. The company launched the original version of the bot in 2017, the same year Boston Dynamics was sold to SoftBank by Google's parent company, Alphabet. The latest version of Handle, a mobile manipulation robot, has been designed for logistics.

https://twitter.com/BostonDynamics/status/1111371709406302209

Last week, Boston Dynamics released a video on YouTube showing the Handle robot loading different types of boxes.

https://www.youtube.com/watch?v=5iV_hB08Uns

The robot autonomously performs SKU pallet building and depalletizing after initializing and localizing against the pallets. Handle has a vision system that tracks the marked pallets for navigation and finds individual boxes for grasping and placing. When Handle places a box onto a pallet, it uses force control to settle the box against its neighbors. The robot is designed to handle boxes up to 15 kg (33 lb) and works with pallets that are 1.2 m deep and 1.7 m tall (48 inches deep and 68 inches tall).

Previously, Boston Dynamics has released interesting videos showing dog-like robots unloading dishwashers and climbing stairs, galloping bovid-like creatures, and more. A description of Handle on the Boston Dynamics website reads, "Handle is a robot that combines the rough-terrain capability of legs with the efficiency of wheels. It uses many of the same principles for dynamics, balance, and mobile manipulation found in the quadruped and biped robots we build, but with only 10 actuated joints, it is significantly less complex."

The Handle video is trending on YouTube with over 1,767,161 views. People seem excited about Handle, though a few are skeptical about its target audience.

https://twitter.com/jesusmoses/status/1112304357482090496

Others think it will help e-commerce giants like Amazon and Flipkart keep their warehouses running smoothly.

https://twitter.com/MRagnorok/status/1111694538303787009

Know more about Handle on the Boston Dynamics website.


According to a report, Microsoft plans for new 4K webcams featuring facial recognition to all Windows 10 devices in 2019

Amrata Joshi
27 Dec 2018
3 min read
Microsoft plans to introduce two new webcams next year. One is designed to extend Windows Hello facial recognition to all Windows 10 PCs. The other will work with the Xbox One, bringing back the Kinect feature that let users automatically sign in by moving in front of the camera. These webcams will work with multiple accounts and family members. Microsoft is also planning to launch its Surface Hub 2S in 2019, an interactive digital smart board for the modern workplace that features a USB-C port and upgradeable processor cartridges.

PC users have so far relied on alternatives from Creative, Logitech, and Razer to bring facial recognition to desktop PCs. The planned webcams will be linked to the USB-C webcams that will ship with the Surface Hub 2, which launches next year, while the Surface Hub 2X is expected in 2020. In an interview with The Verge in October, Microsoft Surface chief Panos Panay suggested that Microsoft could release a USB-C webcam soon. "Look at the camera on Surface Hub 2, note it's a USB-C-based camera, and the idea that we can bring a high fidelity camera to an experience, you can probably guess that's going to happen," Panay hinted. Such a camera could be used to extend the experience beyond Microsoft's own Surface devices.

The camera for Windows 10 will, for the first time, bring facial recognition to all Windows 10 PCs. Currently, Windows Hello facial recognition is restricted to built-in webcams like the ones on Microsoft's Surface devices. According to Windows watcher Paul Thurrott, Microsoft is making the new 4K cameras for both Windows 10 PCs and its Xbox One gaming console. The webcam will return a Kinect-like feature to the Xbox One, allowing users to authenticate by putting their face in front of the camera.

With the recent Windows 10 update, Microsoft enabled WebAuthn-based authentication, which lets users sign in to its sites, such as Office 365, with Windows Hello and security keys. The Windows Hello-compatible webcams and FIDO2, a passwordless sign-in with Windows Hello at its core, will launch together next year. It will be interesting to see how the new year turns out for Microsoft and its users with these major releases.

Infosys and Siemens collaborate to build IoT solutions on MindSphere

Savia Lobo
06 Jul 2018
2 min read
Infosys recently announced its partnership with Siemens to build applications for Siemens' open, cloud-based IoT operating system, MindSphere. MindSphere connects real-world objects (industrial machinery, systems, equipment, and so on) to the digital world with the help of IoT and advanced analytics, providing industry applications and services to help businesses succeed. With this collaboration, Infosys and Siemens will enable customers to leverage the true power of the data generated by their devices. The initial focus, as the companies plan it, will be on customers in the manufacturing, energy, utilities, healthcare, pharmaceutical, transportation, and logistics industries.

How Infosys plans to help Siemens' MindSphere:

- Infosys plans to offer end-to-end implementation services and post-implementation support for MindSphere
- It will use its repository of Industry 4.0 accelerators, platform tools, and more to help customers get on board quickly
- It will give customers a more efficient experience through data analytics features such as predictive maintenance and end-to-end factory visibility
- Customers will also benefit by monetizing new data-driven services

Ravi Kumar S, President and Deputy COO of Infosys, says, "There is an increasing need for enterprises to accelerate their digital journey and to deliver new and innovative services. This partnership will help us bring exciting solutions to our customers that will combine strategic insights and execution excellence." Combining Infosys' expertise in industrial engineering, industrial analytics, and AR and VR with Siemens' strength in manufacturing industrial assets brings valuable digital services to customers across sectors.

Know more about the partnership alliance on the Infosys blog post.


Microsoft Surface Go is now shipping at $399

Sugandha Lahoti
03 Aug 2018
2 min read
Microsoft's tiny two-in-one tablet, the Surface Go, is now shipping! Microsoft launched the Surface Go last month as the smaller, cheaper version of its more expensive counterpart, the Surface Pro. The Surface Go is Microsoft's response to the popular 2018 iPad and Chromebooks, with features such as a 10-inch screen, a front-facing camera with facial recognition, a USB-C 3.1 port, and an integrated kickstand.

The Surface Go comes with Intel's Pentium Gold 4415Y processor, a dual-core seventh-generation model. According to Microsoft, this chip was chosen because it provides the right balance between performance, battery life, and thermal properties, allowing for a thin, fanless design. The device has a 10-inch display and weighs nearly 500 g.

The base model of the Surface Go is $399; the upgraded version costs around $549. Those prices don't include the Surface keyboard, which costs $100 extra for the basic model, or $125 for the Alcantara Signature version. The device also supports inking with the $99 Surface Pen. There's also a new $34.99 Surface Mobile Mouse: a two-button, ambidextrous Bluetooth mouse with a scroll wheel, available in silver, red, and blue.

For now, the Surface Go is available to purchase online from the Microsoft Store in 25 countries only, and prices may vary by region. In the US, the price ranges between $399 and $599 depending on the RAM (4 or 8 GB), storage (64 or 128 GB), and operating system (Windows 10 Home in S mode, or Windows 10 Pro).


Partnership alliances of Kontakt.io and IOTA Foundation for IoT and Blockchain

Savia Lobo
23 May 2018
2 min read
Kontakt.io, a leading IoT location platform provider, recently announced a partnership with the IOTA Foundation, a non-profit open-source foundation that backs IOTA. The partnership aims to integrate IOTA's next-generation distributed ledger technology into Kontakt.io's location platform and is designed specifically for condition monitoring and asset tracking.

The integration will allow tamper-proof, chargeable readings of smart sensor data. It is beneficial for healthcare operators and supply chain firms that monitor environmental conditions for compliance reasons, as they can explore fully transparent ways of storing and reporting on telemetry data. The partnership between Kontakt.io's IoT platform and IOTA's distributed ledger will encrypt device-to-device and device-to-cloud communication of telemetry so the data remains intact. Customers, including manufacturers, carriers, inspectors, technology providers, and others, can leverage this new technology as it would:

- Increase trust and transparency
- Ease dispute resolution
- Result in better detection of compliance breaches
- Prevent delivery of faulty products

How Kontakt.io and IOTA benefit each other

IOTA eliminates the cost barrier and needs less computing power to confirm transactions. Unlike blockchains such as Ethereum, IOTA is capable of processing many operations in real time, and it scales faster as the number of queued transactions grows. This makes Proof of Work (PoW) possible and efficient in the IoT environment, and IOTA is likely to become the next security standard for IoT. On the other hand, IOTA has partnered with Kontakt.io to build the building blocks of a smart supply chain using Kontakt.io's powerful IoT platform.

Read more about this partnership on Kontakt.io's official website.


Azure Kinect Developer Kit is now generally available, will start shipping to customers in the US and China

Amrata Joshi
12 Jul 2019
3 min read
In February this year, at Mobile World Congress (MWC), Microsoft announced the $399 Azure Kinect Developer Kit, an all-in-one perception system for computer vision and speech solutions. Recently, Microsoft announced that the kit is generally available and will begin shipping to customers in the U.S. and China who preordered it. The Azure Kinect Developer Kit aims to offer developers a platform to experiment with AI tools and to help them plug into Azure's ecosystem of machine learning services.

The Azure Kinect DK camera system features a 1MP (1,024 x 1,024 pixel) depth camera, a 360-degree microphone, a 12MP RGB camera that provides an additional color stream aligned to the depth stream, and an orientation sensor. It uses the same time-of-flight sensor the company developed for the second generation of its HoloLens AR visor, and it includes an accelerometer and gyroscope (IMU) for sensor orientation and spatial tracking. Developers can also experiment with the field of view thanks to a global shutter and automatic pixel gain selection. The kit works with a range of compute types that can be used together to provide a "panoramic" understanding of the environment. This advancement might help Microsoft users in health and life sciences experiment with depth sensing and machine learning.

During the keynote, Microsoft Azure corporate vice president Julia White said, "Azure Kinect is an intelligent edge device that doesn't just see and hear but understands the people, the environment, the objects, and their actions." She further added, "It only makes sense for us to create a new device when we have unique capabilities or technology to help move the industry forward."

A few users are complaining about the product and expecting changes in the future, highlighting issues with the mics, the SDK, the sample code, and more. A user commented in a HackerNews thread, "Then there's the problem that buries deep in the SDK is a binary blob that is the depth engine. No source, no docs, just a black box. Also, these cameras require a BIG gpu. Nothing is seemingly happening onboard. And you're at best limited to 2 kinects per usb3 controller. All that said, I'm still a very happy early adopter and will continue checking in every month or two to see if they've filled in enough critical gaps for me to build on top of."

Others seem excited and think the camera will be helpful in projects. Another user commented, "This is really cool!", adding, "This camera is way better quality, so it'll be neat to see the sort of projects can be done now."

To know more about the Azure Kinect Developer Kit, watch the video: https://www.youtube.com/watch?v=jJglCYFiodI

Apple is ditching butterfly keyboard and opting for a reliable scissor switch keyboard in MacBook, per an Apple analyst

Vincy Davis
05 Jul 2019
4 min read
Yesterday, Apple analyst Ming-Chi Kuo, in a report seen by MacRumors, revealed that Apple is going to include a new scissor-switch keyboard in the 2019 MacBook Air. The scissor-switch keyboard is expected to use glass fiber to increase its durability. This means Apple will finally do away with the butterfly keyboard, introduced in 2015, which has long been infamous for reliability and key-travel issues. The MacBook Pro will also get the new scissor-switch keyboard, but not until 2020.

The scissor-switch keyboard uses a mechanism in which each key is attached to the keyboard via two plastic pieces that interlock in a "scissors"-like fashion and snap to the keyboard and the key. In a statement to MacRumors, Kuo says, "Though the butterfly keyboard is still thinner than the new scissor keyboard, we think most users can't tell the difference. Furthermore, the new scissor keyboard could offer a better user experience and benefit Apple's profits; therefore, we predict that the butterfly keyboard may finally disappear in the long term." Kuo also states that Apple's butterfly design was expensive to manufacture due to low yields. The scissor-switch keyboard may be more costly than a regular laptop keyboard, but it will be cheaper than the butterfly keyboard.

The scissor-switch keyboard is intended to improve the typing experience of Apple users. The existing butterfly keyboard has always been a controversial product, with users complaining about its durability. The butterfly design is sensitive to dust, with even the slightest particle causing keys to jam, and it suffers from heat issues. Last year, a class-action lawsuit was filed against Apple in a federal court in California for allegedly using the flawed butterfly keyboard design in its MacBook variants since 2015. Apple has also released a tutorial on how to clean the butterfly keyboard of the MacBook or MacBook Pro. Apple has introduced four generations of butterfly keyboards, attempting to address user complaints about stuck keys, repeated key inputs, and even the loud clacking sound when striking each keycap. In March this year, Apple officially apologised for inflicting MacBook owners with its dust-prone, butterfly-switch keyboard. The apology was in response to a critical report by the Wall Street Journal's Joanna Stern about the MacBook's butterfly-switch keyboard, which can make typing the E, R, and T keys a nightmare.

The new scissor-switch keyboard is thus expected to be a big sigh of relief for MacBook customers. It is the same keyboard mechanism that was present in all pre-2015 MacBooks and was quite well received by MacBook users back then, though the new model is expected to be a more meaningful evolution of the earlier design. Kuo says the replacement keyboard will be supplied solely by specialist laptop keyboard maker Sunrex rather than Wistron, which currently makes the butterfly keyboards for Apple. The analyst expects the new Sunrex keyboard to go into mass production in 2020, making the Taiwan-based firm Apple's most important keyboard supplier.

Users are relieved that Apple has finally decided to ditch the butterfly keyboard.

https://twitter.com/alon_gilboa/status/1146797852242448385
https://twitter.com/danaiciocan/status/1146772468432023553
https://twitter.com/najeebster/status/1146708948139106305

A user on Hacker News says, "Finally! It took four years to admit there is something wrong. And one more year to change upcoming laptops. It's unbelievable how this crap could be released. Coming from a ThinkPad to an MBP in 2015 I was disappointed by the keyboard of the MBP 2015. Then switching to an MBP 2018 I was shocked how much worse things could get."

Amrata Joshi
14 Jan 2019
3 min read

Google bids farewell to its audio dongle, Chromecast Audio

Last week, Google decided to stop manufacturing Chromecast Audio, the audio dongle that allowed users to connect speakers to their Google Cast setup, since the company now has a variety of new audio products for users. The remaining stock of the Chromecast Audio is being sold for $15 instead of $35.

The Chromecast Audio dongle is designed to plug into a regular speaker via a 3.5 mm audio cable. The device can stream audio from plenty of apps at a louder volume without resorting to Bluetooth. Chromecast Audio was launched in 2015 alongside the second-generation Chromecast, and over the years it evolved to feature multi-room support. Google will still support its Chromecast Audio users for the time being.

In a statement to TechCrunch, Google said, "Our product portfolio continues to evolve, and now we have a variety of products for users to enjoy audio. We have therefore stopped manufacturing our Chromecast Audio products. We will continue to offer assistance for Chromecast Audio devices, so users can continue to enjoy their music, podcasts and more."

It seems Google is more inclined towards getting people to purchase its home products, Google Assistant, or Cast-enabled speakers from its partners.

Users are giving the news mixed reviews. Some users are now wary of investing in Google products because, as they see it, Google often cans its products. One user commented on Hacker News, "Google is really developing a reputation for starting and canning projects. I'd recommend not getting too invested in their products when possible."

Google has shut down a lot of services in recent years, the latest being Inbox, which will shut down in March this year. Users were also unhappy when Google discontinued Google Reader in 2013. This hints at Google's tendency to shut down even popular products. One of the comments on Hacker News reads, "Google Reader was damn useful and is a poster child of Google's habit of hyping up useful products and then canning them."

A few users still support Google and its decision. One user commented, "I don't know who out there is heavily invested in a $35 audio dongle. I love mine, but it still works just as well today as it did yesterday and not being able to order more isn't causing me any anxiety." Another user commented, "The 3-something year old hardware dongle is no longer being made, that's it. That's the entirety of the news. The Cast project as a whole is not being canned. Cast-enabled speakers, receivers, etc... are all still widely available from a wide number of manufacturers, that's not changing."

TLS comes to Google public DNS with support for DNS-over-TLS connections

Researchers release unCaptcha2, a tool that uses Google's speech-to-text API to bypass the reCAPTCHA audio challenge

Google's secret Operating System 'Fuchsia' will run Android Applications: 9to5Google Report

Amrata Joshi
28 May 2019
3 min read

Arm announces new CPU and GPU chipsets designs, Mali-G77 GPU, Cortex-A77 CPU, and much more!

Yesterday, Arm, the company whose basic chip architecture is used in most smartphones, announced new designs for its premium CPU and GPU chipsets; the first actual chips are expected before the end of the year. The company announced the Cortex-A77 CPU, the Mali-G77 GPU, and an energy-efficient machine learning processor.

https://twitter.com/Arm/status/1133029847637344256

Cortex-A77 CPU

As with every new generation of Arm CPUs, the Cortex-A77 promises to be more power efficient and provide better raw processing performance. The Cortex-A77 has been built to fit in smartphone power budgets while improving performance. It is the second-generation design of this family and brings a substantial performance upgrade over the Cortex-A76. The Cortex-A77 targets next-generation laptops and smartphones and supports upcoming use cases like advanced ML. It will also support the range of 5G-ready devices set to come to market following the 5G rollout in 2019.

Thanks to a combination of hardware and software optimizations, the Cortex-A77 also brings better machine learning performance. It delivers more than 20 percent better integer performance, more than 35 percent better floating-point performance, and more than 15 percent more memory bandwidth.

Mali-G77 GPU

The company's new Mali-G77 GPU architecture is the first to be based on its Valhall GPU design. It offers around 1.4x performance improvement over the G76, is 30 percent more energy efficient, and is 60 percent faster at running machine learning inference and neural net workloads. The Mali-G77 provides uncompromising graphics performance and brings performance improvements to complex AR and ML use cases.

https://twitter.com/Arm/status/1132992854282915841

Machine learning processor

Arm already offers Project Trillium, its heterogeneous machine learning compute platform, for the machine learning processor. Since the announcement of Trillium last year, Arm has improved energy efficiency by 2x and scaled performance up to 8 cores and 32 TOP/s. The machine learning processor is based on a new architecture that targets connected devices such as augmented and virtual reality (AR/VR) devices, smartphones, smart cameras, and drones, as well as medical and consumer electronics. The processor handles a variety of neural networks, such as convolutional (CNN) and recurrent (RNN) networks, for image enhancement, classification, object detection, speech recognition, and natural language understanding. It also minimizes system memory bandwidth through various compression technologies.

Read Also: Snips open sources Snips NLU, its Natural Language Understanding engine

The company announced, "Every new smartphone experience begins with more hardware performance and features to enable developers to unleash further software innovation." The company further added, "For developers, the CPU is more critical than ever as it not only handles general-compute tasks, as well as much of the device's ML compute, which must scale beyond today's limits. The same holds true for more immersive untethered AR/VR applications, and HD gaming on the go."

To know more about this news, check out Arm community's post.

Intel discloses four new vulnerabilities labeled MDS attacks affecting Intel chips

The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration

AI chipmaking startup 'Graphcore' raises $200m from BMW, Microsoft, Bosch, Dell
Amrata Joshi
09 May 2019
3 min read

Introducing Open Eye MSA Consortium by industry leaders for targeting high-speed optical connectivity applications

Yesterday, the Open Eye Consortium announced the establishment of its Multi-Source Agreement (MSA) to standardize advanced specifications for optical modules. The specifications target lower latency, more efficient, and lower cost optical modules for 50Gbps, 100Gbps, 200Gbps, and up to 400Gbps datacenter interconnects over single-mode and multimode fiber.

The formation of the Open Eye MSA was initiated by MACOM and Semtech Corporation, with 19 current members across Promoter and Contributing membership classes. The initial specification release is planned for Fall 2019, followed by product availability later in the year.

The Open Eye MSA aims to speed the adoption of PAM-4 optical interconnects scaling to 50Gbps, 100Gbps, 200Gbps, and 400Gbps by expanding on existing standards. This will let optical module implementations use less complex, lower cost, lower power, and optimized clock and data recovery (CDR). The Open Eye MSA is investing in the development of an industry-standard optical interconnect that would bring interoperability among a broad group of industry-leading technology providers, including providers of lasers, electronics, and optical components. The consortium's approach enables users to scale to next-generation baud rates.

Dale Murray, Principal Analyst at LightCounting, said, "LightCounting forecasts that sales of next-generation Ethernet products will exceed $500 million in 2020. However, this is only possible if suppliers can meet customer requirements for cost and power consumption. The new Open Eye MSA addresses both of these critical requirements. Having low latency is an extra bonus that HPC and AI applications will benefit from."

The initial Open Eye MSA specification will focus on 53Gbps-per-lane PAM-4 solutions for 50G SFP, 100G DSFP, 100G SFP-DD, 200G QSFP, and 400G QSFP-DD and OSFP single-mode modules. Subsequent specifications will target multimode and 100Gbps-per-lane applications.

David (Chan Chih) Chen, AVP, Strategic Marketing for Transceiver, AOI, said, "Through its participation in the Open Eye MSA, AOI is leveraging our laser and optical module technology to deliver benefits of low cost, high-speed connectivity to next-generation data centers."

Jeffery Maki, Distinguished Engineer II, Juniper Networks, said, "As a leader in switching, routing and optical interconnects, Juniper Networks has a unique perspective into the technology and market dynamics affecting enterprise, cloud and service provider data centers, and the Open Eye MSA provides a forum to apply our insight and expertise on the pathway to 200G and faster connectivity speeds."

To know more about this news, check out Open Eye MSA's page.

Understanding network port numbers, TCP, UDP, and ICMP on an operating system

The FTC issues orders to 7 broadband companies to analyze ISP privacy practices given they are also ad-support content platforms

Using statistical tools in Wireshark for packet analysis [Tutorial]
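For readers unfamiliar with the signaling scheme, PAM-4 packs two bits into each of four amplitude levels, which is why a 53Gbps lane can run at roughly half the symbol rate of a conventional two-level (NRZ) link. The sketch below illustrates a generic Gray-coded PAM-4 mapping; it is a simplified teaching example, not taken from the Open Eye specification, and the level values are arbitrary:

```python
# Generic Gray-coded PAM-4 mapping: adjacent amplitude levels differ by
# one bit, so a single-level decision error corrupts only one bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a flat bit sequence (even length) to PAM-4 symbol levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Eight bits become four symbols: half the symbol rate of NRZ.
print(pam4_encode([0, 0, 1, 1, 1, 0, 0, 1]))  # -> [-3, 1, 3, -1]
```

The halved symbol rate is what relaxes the bandwidth demands on lasers and electronics, at the cost of tighter eye openings between levels, which is where the consortium's focus on simpler, optimized CDR comes in.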