Tech News

3711 Articles

Matthew Flatt’s proposal to change Racket’s s-expressions based syntax to infix representation creates a stir in the community

Bhagyashree R
09 Aug 2019
4 min read
RacketCon 2019 took place last month, from July 13 to 14, bringing together the Racket community to discuss ideas and future plans for the Racket programming language. Matthew Flatt, one of the core developers, took the stage to give his talk, State of Racket, in which he spoke about the growing community, performance improvements, and much more. He also touched upon his recommendation to change the surface syntax of Racket2, which has sparked a lot of discussion in the Racket community.

https://www.youtube.com/watch?v=dnz6y5U0tFs&t=390

Later in July, Greg Hendershott, who has contributed Racket projects like Rackjure and Travis-Racket and has driven a lot of community participation, expressed his concern about the change in a blog post. "I'm concerned the change won't help grow the community; instead hurt it," he wrote. He also shared that he will shift his focus towards other programming languages, which implies that he is stepping down as a Racket contributor.

Matthew Flatt recommends a surface syntax change to remove technical barriers to entry

There is no official proposal for this change yet, but Flatt has discussed it a couple of times. According to Flatt's recommendation, Racket2's 'lispy' s-expressions should be replaced with something that is not a barrier to entry for new users. He suggests getting rid of, or reducing the use of, parentheses and introducing infix operators, which means the operator is written between its operands, for instance, a + b. "More significantly, parentheses are certainly an obstacle for some potential users of Racket. Given the fact of that obstacle, it's my opinion that we should try to remove or reduce the obstacle," Flatt writes in a mailing list.

Racket is a general-purpose, multi-paradigm programming language based on the Scheme dialect of Lisp, and it is also an ecosystem for language-oriented programming. Flatt's rationale is that the current syntax is a hindrance not only to potential users of Racket as a programming language, but also to those who want to use it as "a programming-language programming language". He adds, "The idea of language-oriented programming (LOP) doesn't apply only to languages with parentheses, and we need to demonstrate that." With this change, he hopes to make Racket2 more familiar and easier to accept for users outside the Racket community.
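To make the contrast concrete, here is a small Python sketch of the two notations (my illustration, not anything from the talk): a tiny evaluator for prefix, s-expression-style input such as (+ 1 (* 2 3)), next to the same arithmetic written with the infix operators Flatt suggests Racket2 could adopt.

```python
# Prefix (s-expression style) puts the operator first and parenthesizes every call;
# infix places the operator between its operands. Both forms below compute 7.

def eval_prefix(expr):
    """Evaluate a nested-tuple prefix expression such as ("+", 1, ("*", 2, 3))."""
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    values = [eval_prefix(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

print(eval_prefix(("+", 1, ("*", 2, 3))))  # prefix style, like (+ 1 (* 2 3))
print(1 + 2 * 3)                           # infix style: a + b notation
```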
Some Racket developers believe changing the s-expression-based syntax is not "desirable"

Many developers in the Racket community share Greg Hendershott's sentiment. A user on Hacker News added, "Getting rid of s expressions without it being part of a more cohesive improvement (like better supporting a new type system or something) just for mainstream appeal seems like an odd choice to me."

Another user added, "A syntax without s-expressions is not an innovative feature. For me, it's not even desirable, not at all. When I'm using non-Lispy languages like Rust, Ada, Nim, and currently a lot of Go, that's despite their annoying syntactic idiosyncrasies. All of those quirky little curly braces and special symbols to save a few keystrokes. I'd much prefer if all of these languages used s-expressions. That syntax is so simple that it makes you focus on the semantics."

Others are more neutral about the suggested change: "To me, Flatt's proposal for Racket2 smells more like adding tools to better facilitate infix languages than deprecating S-expressions. Given Racket's pedagogical mission, it looks more like a move toward migrating the HtDP series of languages (Beginning Student, Intermediate Student, Intermediate Student with Lambda, and Advanced Student) to infix syntax than anything else. Not really the end of the world or a big change to the larger Racket community. Just another extension of an ecosystem that remains s-expression based despite Algol and Datalog shipping in the box," one user wrote.

To know more about this change, check out the discussion on Racket's mailing list. You can also share your proposals as Racket2 RFCs.

Racket 7.3 releases with improved Racket-on-Chez, refactored IO system, and more
Racket 7.2, a descendant of Scheme and Lisp, is now out!
Racket v7.0 is out with overhauled internals, updates to DrRacket, TypedRacket among others


Uber AI Labs introduce POET(Paired Open-Ended Trailblazer) to generate complex and diverse learning environments and their solutions

Savia Lobo
09 Jan 2019
3 min read
Yesterday, researchers at Uber AI Labs released the Paired Open-Ended Trailblazer (POET) algorithm, which pairs the generation of environmental challenges with the optimization of agents to solve those challenges. POET explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems. The algorithm aims to generate new tasks, optimize solutions for them, and transfer agents between tasks to enable otherwise unobtainable advances.

The researchers have applied POET to create and solve bipedal walking environments. These environments were adapted from the BipedalWalker environments in OpenAI Gym, popularized in a series of blog posts and papers by David Ha. Each environment Ei is paired with a neural network-controlled agent Ai that tries to learn to navigate through that environment. (An image in the original post depicts an example environment and agent. Source: Uber Engineering.)

In this experiment, the POET algorithm aims to achieve two goals: (1) evolve the population of environments towards diversity and complexity, and (2) optimize agents to solve their paired environments. During a single run, POET generates a diverse range of complex and challenging environments, as well as their solutions. POET also periodically performs transfer experiments to explore whether an agent optimized in one environment might serve as a stepping stone to better performance in a different environment. There are two types of transfer attempts:

Direct transfer: agents from the originating environment are directly evaluated in the target environment.
Proposal transfer: agents take one ES optimization step in the target environment.

By testing transfers to other active environments, POET harnesses the diversity of its multiple agent-environment pairs to its full potential, i.e., without missing any opportunities to gain an advantage from existing stepping stones. The researchers therefore suggest that POET could invent radical new courses and solutions to them at the same time. It could similarly produce fascinating new kinds of soft robots for unique challenges it invents that only soft robots can solve. POET could also generate simulated test courses for autonomous driving that both expose unique edge cases and demonstrate solutions to them.

In their blog, the researchers said that they will release the source code soon, and also that "more exotic applications are conceivable, like inventing new proteins or chemical processes that perform novel functions that solve problems in a variety of application areas. Given any problem space with the potential for diverse variations, POET can blaze a trail through it".

Read more about Paired Open-Ended Trailblazer (POET) in detail in its research paper. Here's a video that demonstrates the working of the POET algorithm: https://youtu.be/D1WWhQY9N4g

Canadian court rules out Uber's arbitration process; calls it "unconscionable" and "invalid"
Uber to restart its autonomous vehicle testing, nine months after the fatal Arizona accident
Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon
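The overall loop described above can be summarized roughly as follows. This is a hedged Python sketch based only on the description in this article; mutate_environment, es_step, and evaluate are illustrative placeholders rather than functions from Uber's (not yet released) code.

```python
import random

def poet_loop(pairs, iterations, mutate_environment, es_step, evaluate):
    """pairs: list of (environment, agent) tuples, the active population."""
    for step in range(iterations):
        # 1. Periodically grow the population of environments toward
        #    greater diversity and complexity.
        if step % 20 == 0:
            parent_env, parent_agent = random.choice(pairs)
            pairs.append((mutate_environment(parent_env), parent_agent))

        # 2. Optimize every agent against its own paired environment
        #    (one evolution-strategies step).
        pairs = [(env, es_step(agent, env)) for env, agent in pairs]

        # 3. Transfer attempts: check whether an agent from another pair
        #    beats the incumbent, either directly or after one ES step.
        for i, (env, _) in enumerate(pairs):
            for j, (_, other_agent) in enumerate(pairs):
                if i == j:
                    continue
                direct = other_agent                  # direct transfer
                proposal = es_step(other_agent, env)  # proposal transfer
                best = max([pairs[i][1], direct, proposal],
                           key=lambda agent: evaluate(agent, env))
                pairs[i] = (env, best)
    return pairs
```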


15 year old uncovers Snapchat's secret visual search function

Richard Gall
10 Jul 2018
3 min read
A 15-year-old app researcher has discovered something hidden inside Snapchat's code: text that reads "Press and hold to identify an object, song, barcode, and more! This works by sending data to Amazon, Shazam, and other partners." The find, uncovered by Ishan Agarwal, who sent the tip to TechCrunch, strongly suggests that Snapchat is working on a 'visual search engine' with some kind of link to Amazon.

The remarkable find pushed Snap Inc.'s value on the stock market up 3%, which underlines just how much of a big deal this could be for the social media company. The feature, known internally as 'Eagle', indicates that Snapchat is moving quickly when it comes to developing new features. Given Snap Inc. reported a $385 million loss last quarter, this could be the shot in the arm the company needs. Snap Inc. declined to comment on Agarwal's discovery when asked by TechCrunch. But although nothing has been confirmed, the evidence is clear enough inside the code.

How Snapchat's 'Eagle' search function works

It's likely that the new feature will pull up the Amazon product page, a list of sellers, and reviews for the object, or product, that is snapped by the user. You'll probably also be able to copy that Amazon link and share the product information with your friends. However, there is an element of speculation here; we'll have to wait and see when it finally launches.

The TechCrunch report links the feature to Snapchat's context cards, launched towards the end of 2017. These are essentially cards that offer detailed information about things like restaurants, such as opening times and reviews. Snapchat has been coming up with different digital commerce tools, and this feature fits the bill, especially if the platform is aiming to move deeper into the eCommerce world.

It's worth noting that other social media platforms like Pinterest have similar features. In fact, Pinterest has already partnered with retailers like Target; in that instance, the visual search feature is directly embedded into the Target mobile app. Google Lens also works in a similar fashion. The main difference is that Snapchat would use a third-party integration (with Amazon) for the identification process.

Ishan Agarwal: the 15-year-old app researcher

This isn't Ishan Agarwal's first software discovery. He has uncovered a number of new Instagram features, such as video calling and focus portrait mode, before they were officially launched, all of which were sent as tips to TechCrunch. He's clearly a valuable asset for TechCrunch, and now, given the positive movements on the stock market, an unexpectedly valuable asset for Snap too. Follow Ishan Agarwal on Twitter: @IshanAgarwal24

Source: TechCrunch

Read next: There's another player in the advertising game: augmented reality
Apple's new ARKit 2.0 brings persistent AR, shared augmented reality experiences and more


IBM, Oracle under the scanner again for questionable hiring and firing policies

Melisha Dsouza
21 Jan 2019
5 min read
The Guardian has reported that Oracle has come under the scanner for pay discrimination between male and female employees. On the very same day, The Register reported that an affidavit has been filed against IBM for hiding the ages of laid-off employees from the Department of Labor.

Pay scale discrimination at Oracle

"Women are getting paid less across the board. These are some of the strongest statistics I've ever seen – amazingly powerful numbers." -Jim Finberg, attorney for the plaintiffs

On 18th January, a motion was filed against Oracle in California alleging that the company's female employees were paid (on average) $13,000 less per year than men doing similar work, The Guardian reports. More than 4,200 women will be represented in this motion after an analysis of payroll data found that women made 3.8% less in base salaries on average, 13.2% less in bonuses, and 33.1% less in stock value than male employees. The analysis also found that the pay disparities exist even for women and men with the same tenure and performance review score in the same job categories.

The complaint outlines several instances in which Oracle's female plaintiffs noticed the discrepancies in pay either accidentally or by chance. One of the plaintiffs saw a pay stub from a male employee that drew her attention to the wage gap between them, especially since she was the male employee's trainer.

This is not the first time Oracle has been involved in a case like this. The Guardian reports that in 2017, the US Department of Labor (DoL) filed a suit against Oracle alleging that the firm had a "systemic practice" of paying white male workers more than their counterparts in the same job titles, leading to pay discrimination against women and against black and Asian employees. Oracle dismissed these allegations and called them "without merit", stating that its pay decisions were "non-discriminatory and made based on legitimate business factors including experience and merit".

Jim Finberg, the attorney for this suit, said that none of the named plaintiffs work at Oracle any more; some of them left due to their frustration over discriminatory pay. The suit also mentions that the disparities in pay were caused because Oracle used the prior salaries of new hires to determine their compensation at the company, leading to inequalities in pay. The suit claims that Oracle was aware of its discriminatory pay and "had failed to close the gap even after the US government alleged specific problems."

The IBM layoffs

Along similar lines, a former senior executive at IBM alleges, in an affidavit filed on Thursday in the Southern District of New York, that her superiors directed her to hide information about the older staff being laid off by the company from the US Department of Labor. Catherine Rodgers, formerly IBM's vice president in its Global Engagement Office, was terminated after nearly four decades with IBM. The Register reports that Rodgers believes she was fired for raising concerns that IBM was engaged in systematic age discrimination against employees over the age of 40. IBM has previously been involved in controversies over laying off older workers, following the ProPublica report of March 2018 that highlighted this practice.

Rodgers, who served as VP in IBM's Global Engagement Office and senior state executive for Nevada, had access to the list of people to be laid off in her group. She noticed several unsettling patterns:

1. All of the employees to be laid off from her group were over the age of 50.
2. In April 2017, two employees over age 50 who had been included in the layoff filed a request for financial assistance from the Department of Labor under the Trade Assistance Act. The DoL sent over a form asking Rodgers to list all of the employees within her group who had been laid off in the last three years, along with their ages. This list was then reviewed with IBM HR, and Rodgers alleges she was "directed to delete all but one name before I submitted the form to the Department of Labor."
3. Rodgers said that IBM began insisting that older staff come into the office daily.
4. Older workers were more likely to face relocation to new locations across the US.

Rodgers says that after she began raising questions, she got her first ever negative performance review, in spite of meeting all her targets for the year, and her workload increased without a pay rise. The plaintiffs' memorandum that accompanied the affidavit asks the court to authorize notifying former IBM employees around the US who are over 40 and have lost their jobs since 2017 that they can join the legal proceedings against the company.

Should these allegations prove true, it is troubling to see some big names in the tech industry displaying such poor leadership morals. The outcome of these lawsuits will have a significant impact on the decisions other companies take for employee welfare in the coming years.

IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever
Pwn2Own Vancouver 2019: Targets include Tesla Model 3, Oracle, Google, Apple, Microsoft, and more!


How Microsoft 365 is now using artificial intelligence to smartly enrich your content in OneDrive and SharePoint

Melisha Dsouza
29 Aug 2018
4 min read
Microsoft is now using the power of artificial intelligence in OneDrive and SharePoint. With this initiative, users can be more productive, make more informed decisions, stay more secure, and benefit from better search. The increasing pressure on employees to be more productive in less time is challenging, especially given the ever-increasing amount of digital content. Microsoft aims to ease some of this pressure by providing smart ways to store and work with content.

Smart features provided by Microsoft 365

Be productive

#1 Video and audio transcription
Beginning later this year, Microsoft will introduce automated transcription services natively available for video and audio files in OneDrive and SharePoint, using the same AI technology available in Microsoft Stream. A full transcript will be shown directly in the viewer alongside a video or while listening to an audio file, improving accessibility and search, and further helping users collaborate to improve productivity and quality of work. Once made, a video can be uploaded and published to Microsoft Stream, where AI provides in-video face detection and automatic captions. (Source: Microsoft.com)

#2 Searching audio, video, and images
As announced last September, Microsoft has unlocked the value of photos and images stored in OneDrive and SharePoint. Searching images will now be a cakewalk, as the native, secure AI determines where photos were taken, recognizes objects, and extracts text in photos. Video and audio files also become fully searchable owing to the transcription services mentioned earlier. (Source: Microsoft.com)

#3 Intelligent file recommendations
Microsoft plans to introduce a new files view to OneDrive and the Office.com home page that recommends relevant files to a user, sometime later in 2018. The intelligence of Microsoft Graph takes into account how a user works, who the user works with, and activity on content shared with the user across Microsoft 365, and uses this information to suggest files while the user collaborates on content in OneDrive and SharePoint. The Tap feature in Word 2016 and Outlook 2016 already intelligently recommends content stored in OneDrive and SharePoint based on the context of what the user is working on. (Source: Microsoft.com)

Making informed decisions has never been easier

The AI used in OneDrive and SharePoint helps users make informed decisions while working with content. Smart features like File Insights, Intelligent Sharing, and Data Insights provide stats and facts to make life easier. Suppose you have an important meeting at hand: File Insights gives viewers an 'inside look', i.e., the most important information at a glance, to prepare for the meeting. Intelligent Sharing helps employees share relevant content like documents and presentations with meeting attendees. Finally, Data Insights will use information provided by cognitive services to set up custom workflows that organize images, trigger notifications, or invoke more extensive business processes directly in OneDrive and SharePoint, with deep integration with Microsoft Flow. (Source: Microsoft.com)

Security enhancements

AI-powered OneDrive and SharePoint will help secure content and ward off malicious attacks. 'OneDrive files restore' integrated with Windows Defender Antivirus protects users from ransomware attacks by identifying breaches and guiding users through remediation and file recovery. Users will also be able to apply native data loss prevention (DLP) policies to the text extracted from photos and to audio/video transcriptions, automatically protecting content and supporting intelligent compliance.

Many Fortune 500 customers have already started supporting Microsoft's bold vision to improve content collaboration and are moving their content to OneDrive and SharePoint. Take a look at the official page for detailed information on Microsoft 365's smart new features.

Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy
Microsoft claims it halted Russian spearphishing cyberattacks
Microsoft's .NET Core 2.1 now powers Bing.com


UK researchers build the world’s first quantum compass to overthrow GPS

Sugandha Lahoti
12 Nov 2018
2 min read
British researchers have successfully built the world's first standalone quantum compass, which could act as a replacement for GPS by allowing highly accurate navigation without the need for satellites. The quantum compass was built by researchers from Imperial College London and the Glasgow-based laser firm M Squared. The project received funding from the UK Ministry of Defence (MoD) under the UK National Quantum Technologies Programme.

The device is completely self-contained and transportable. It works by measuring how an object's velocity changes over time; combined with the object's starting point, that is enough to work out its current position. It thereby overcomes issues of traditional GPS systems, such as blockages from tall buildings or signal jamming. High precision and accuracy are achieved by measuring properties of super-cooled atoms, which means any loss in accuracy is "immeasurably small".

Dr. Joseph Cotter, from the Centre for Cold Matter at Imperial, said: "When the atoms are ultra-cold we have to use quantum mechanics to describe how they move, and this allows us to make what we call an atom interferometer. As the atoms fall, their wave properties are affected by the acceleration of the vehicle. Using an 'optical ruler', the accelerometer is able to measure these minute changes very accurately."

The first real-world application for the device could be in the shipping industry, since the current size is suited to large ships or aircraft. However, the researchers are already working on a miniature version that could eventually fit in a smartphone. The team is also working on using the principle behind the quantum compass for research into dark energy and gravitational waves.

Dr. Graeme Malcolm, founder and CEO of M Squared, said: "This commercially viable quantum device, the accelerometer, will put the UK at the heart of the coming quantum age. The collaborative efforts to realize the potential of quantum navigation illustrate Britain's unique strength in bringing together industry and academia – building on advancements at the frontier of science, out of the laboratory to create real-world applications for the betterment of society."

Read the press release on the Imperial College blog.

Quantum computing – Trick or treat?
D-Wave launches Leap, a free and real-time Quantum Cloud Service
Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes controlled quantum entanglements possible
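At its core, the navigation principle described above is dead reckoning: if the starting point is known and acceleration can be measured precisely enough, integrating it twice tracks position with no external signal. Here is a toy Python sketch of that idea (my illustration, not anything from the research):

```python
def dead_reckon(start_pos, start_vel, accelerations, dt):
    """Integrate acceleration samples (m/s^2, taken every dt seconds) twice:
    acceleration -> velocity -> position, starting from a known state."""
    pos, vel = start_pos, start_vel
    for a in accelerations:
        vel += a * dt    # first integration: velocity
        pos += vel * dt  # second integration: position
    return pos

# Start at rest at position 0 and accelerate at 1 m/s^2 for 10 s (100 samples):
print(dead_reckon(0.0, 0.0, [1.0] * 100, 0.1))  # ~50.5 m; the closed form 0.5*a*t^2 gives 50 m
```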

Amazon Neptune, AWS’ cloud graph database, is now generally available

Savia Lobo
31 May 2018
2 min read
Last year, Amazon Web Services (AWS) announced the launch of its fast, reliable, and fully managed cloud graph database, Amazon Neptune, at AWS re:Invent 2017. Recently, AWS announced that Neptune is now generally available.

Graph databases store the relationships between connected data as graphs. This enables applications to access the data in a single operation, rather than through a bunch of individual queries for all the data. In the same way, Neptune makes it easy for developers to build and run applications that work with highly connected datasets. And because Neptune is a managed AWS graph database service, developers also get high scalability, security, durability, and availability.

With general availability, Neptune ships with a large number of performance enhancements and updates, including:

AWS CloudFormation support
AWS Command Line Interface (CLI)/SDK support
An update to Apache TinkerPop 3.3.2
Support for IAM roles with bulk loading from Amazon Simple Storage Service (S3)

Some of the benefits of Amazon Neptune include:

Neptune's query processing engine is highly optimized for two of the leading graph models, Property Graph and W3C's RDF, and for their associated query languages, Apache TinkerPop Gremlin and SPARQL, giving customers the flexibility to choose the right approach for their specific graph use case.
Neptune storage scales automatically, without downtime or performance degradation, as customer data grows.
It allows developers to design sophisticated, interactive graph applications that can query billions of relationships with millisecond latency.
There are no upfront costs, licenses, or commitments required; customers pay only for the Neptune resources they use.

To know more about Amazon Neptune in detail, visit its official blog.

2018 is the year of graph databases. Here's why.
From Graph Database to Graph Company: Neo4j's Native Graph Platform addresses evolving needs of customers
When, why and how to use Graph analytics for your big data
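As a rough illustration of the Property Graph and Gremlin side mentioned above, here is a minimal Python sketch using the open source gremlinpython client. The endpoint is a placeholder, and a real Neptune cluster may additionally require TLS and IAM configuration.

```python
# pip install gremlinpython
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder endpoint; Neptune exposes a Gremlin endpoint of the form
# wss://<cluster-endpoint>:8182/gremlin
conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Property Graph queries in Gremlin: count 'person' vertices and
# list the names of the people they know.
print(g.V().hasLabel("person").count().next())
print(g.V().hasLabel("person").out("knows").values("name").toList())

conn.close()
```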


The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks

Bhagyashree R
04 Mar 2019
3 min read
Last month, the npm engineering team shared in a white paper why they chose Rust to rewrite their authorization service. If you are not already aware, npm is the largest package manager, offering both an open source and an enterprise registry. The npm registry handles about 1.3 billion package downloads per day. With such a huge user base, it is no surprise that the npm engineering team regularly has to keep a check on any area that causes performance problems. Though most of the network-bound operations were pretty efficient, when looking at the authorization service the team saw a CPU-bound task that was causing a performance bottleneck. They decided to rewrite its "legacy JavaScript implementation" in Rust to make it modern and performant.

Why did the npm team choose Rust?

C and C++ were rejected by the team because they require expertise in memory management, and Java was rejected because it requires deploying the JVM and associated libraries. That left two candidate languages: Go and Rust. To narrow down on the language best suited for their authorization service, the team rewrote the service in Node.js, Go, and Rust. The Node.js rewrite acted as a baseline for evaluating Go and Rust; it took just an hour, given the team's expertise in JavaScript, and its performance was very similar to the legacy implementation.

The team finished the Go rewrite in two days but ruled it out because it did not provide a good dependency management solution. "The prospect of installing dependencies globally and sharing versions across any Go project (the standard in Go at the time they performed this evaluation) was unappealing," says the white paper.

Though the Rust rewrite took the team about a week, they were very impressed by the dependency management Rust offers. The team noted that Rust's strategy is very much inspired by npm's; for instance, its Cargo command-line tool is similar to the npm command-line tool. All in all, the team chose Rust because not only did it match their JavaScript-inspired expectations, it also gave a better developer experience. The deployment process for the new service was also pretty straightforward, and even after deployment the team rarely encountered any operational issues.

The team also states that one of the main reasons for choosing Rust was its helpful community. "When the engineers encountered problems, the Rust community was helpful and friendly in answering questions. This enabled the team to reimplement the service and deploy the Rust version to production."

What were the downsides of choosing Rust?

The team did find the language a little difficult to grasp at first. As they shared in the white paper, "The design of the language front-loads decisions about memory usage to ensure memory safety in a different way than other common programming languages." Rewriting the service in Rust also came with the extra burden of maintaining two separate solutions for monitoring, logging, and alerting: one for the existing JavaScript stack and one for the new Rust stack. Being quite a new language, Rust currently also lacks industry-standard libraries and best practices for these solutions.

Read the white paper shared by npm for more details.

Mozilla engineer shares the implications of rewriting browser internals in Rust
Mozilla shares key takeaways from the Design Tools survey
Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web


Zimperium zLabs discloses a new critical vulnerability in multiple high-privileged Android services to Google

Natasha Mathur
02 Nov 2018
5 min read
Tamir Zahavi-Brunner, a security researcher at Zimperium zLabs, posted the technical details of a vulnerability affecting multiple high-privileged Android services, along with an exploit, earlier this week. Brunner had disclosed the vulnerability to Google, which designated it CVE-2018-9411.

Google claims that Project Treble (introduced as part of Android 8.0 Oreo to make updates faster and easier for OEMs to roll out to devices) benefits Android security. However, as Brunner's disclosure shows, elements of Project Treble can also hamper Android security. "This vulnerability is in a library introduced specifically as part of Project Treble and does not exist in a previous library which does pretty much the same thing. This time, the vulnerability is in a commonly used library, so it affects many high-privileged services," says Brunner.

One of the massive changes that came with Project Treble is the split of many system services. Previously, these system services contained both AOSP (Android Open Source Project) and vendor code. After Project Treble, each of these services was split into one AOSP service and one or more vendor services, called HAL services. This means that data which previously passed within the same process between AOSP and vendor code now has to pass through IPC (which enables communication between different Android components) between AOSP and HAL services. Most IPC in Android goes through Binder (a remote procedure call mechanism between client and server processes), so Google decided that the new IPC should do so as well, with some modifications: it introduced HIDL, a whole new format for the data passed through Binder IPC, which makes use of shared memory to maintain simplicity and good performance. HIDL is supported by a new set of libraries and is dedicated to the new Binder domain for IPC between AOSP and HAL services. HIDL comes with its own implementation of many object types; an important object for sharing memory in HIDL is hidl_memory.

Technical details of the vulnerability

A hidl_memory comprises the members mHandle (a HIDL object which holds file descriptors), mSize (the size of the memory to be shared), and mName (which represents the type of memory). These structures are transferred through Binder in HIDL, where complex objects (like hidl_handle or hidl_string) have their own custom code for writing and reading the data. Transferring these structures between 64-bit processes causes no issues; however, in 32-bit processes the size gets truncated to 32 bits, so only the lower 32 bits are used. If a 32-bit process receives a hidl_memory whose size is bigger than UINT32_MAX (0xFFFFFFFF), the actual mapped memory region will be much smaller. "For instance, for a hidl_memory with a size of 0x100001000, the size of the memory region will only be 0x1000. In this scenario, if the 32-bit process performs bounds checks based on the hidl_memory size, they will hopelessly fail, as they will falsely indicate that the memory region spans over more than the entire memory space. This is the vulnerability!" writes Brunner.
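The size truncation is easy to see with a few lines of Python (a quick illustration of the arithmetic, not code from the post): a 64-bit hidl_memory size larger than UINT32_MAX keeps only its lower 32 bits when a 32-bit process interprets it.

```python
UINT32_MAX = 0xFFFFFFFF

def size_as_seen_by_32bit_process(size_64):
    # A 32-bit size keeps only the lower 32 bits of the 64-bit value.
    return size_64 & UINT32_MAX

declared_size = 0x100001000                      # attacker-controlled hidl_memory size
mapped_size = size_as_seen_by_32bit_process(declared_size)

print(hex(mapped_size))                          # 0x1000: the actually mapped region
# Bounds checks performed against declared_size (0x100001000) will pass for
# offsets far beyond the 0x1000 bytes that are really mapped.
```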
After tracking down the vulnerability, the next step is finding a target for it. This requires an eligible HAL service, such as android.hardware.cas, or MediaCasService, which allows apps to decrypt encrypted data.

Exploiting the vulnerability

To exploit the vulnerability, two other issues need to be solved: finding the address of the shared memory and of other interesting data, and making sure that the shared memory gets mapped in the same location each time. The second issue is solved by looking at the memory maps of the linker in the service's memory space. To solve the first issue, the data in the linker_alloc straight after the gap is analyzed, and shared memory is mapped before a blocked thread stack, which makes it easy to reach the memory relatively through the vulnerability. Hence, instead of getting only one thread into that blocked state, multiple (5) threads are generated, which in turn causes more threads to be created and more thread stacks to be allocated. Once the shared memory is mapped before the blocked thread stack, the vulnerability is used to read two things from the thread stack: the thread stack address, and the address where libc is mapped, in order to build a ROP chain.

The last step is executing this ROP chain. However, Brunner states that SELinux limitations on the process prevent turning the ROP chain into full arbitrary code execution: "There is no execmem permission, so anonymous memory cannot be mapped as executable, and we have no control over file types which can be mapped as executable". Since the main objective is to obtain the QSEOS version, code run via the ROP chain does exactly that, while making sure the thread does not crash immediately afterwards. The process is then left in a somewhat unstable state; to leave everything clean, the service used for the exploit is crashed (by writing to an unmapped address) so that it restarts.

For complete information, read the official Zimperium blog post.

FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
A kernel vulnerability in Apple devices gives access to remote code execution


Android Studio 3.2 Beta 5 out, with updated Protobuf Gradle plugin

Savia Lobo
31 Jul 2018
2 min read
Good news for Android developers: Android Studio 3.2 Beta 5 is now available and ready for download. This is the latest beta channel preview version. Android Studio is Android's official IDE, purpose-built for Android; it helps accelerate application development and build high-quality apps for every Android device.

Read also: What is Android Studio and how does it differ from other IDEs?

The last major release was Android Studio 3.1.0, released in March 2018, and the current stable version is Android Studio 3.1.3. The latest Canary channel preview version is Android Studio 3.3 Canary 3; the Canary build lets you try the latest features, which are only lightly tested compared to those in the Beta build.

What's new in the Android Studio 3.2 Beta 5 update?

Change in behaviour: this update now requires a minimum Protobuf Gradle plugin version of 0.8.6. This Gradle plugin compiles Protocol Buffer (aka Protobuf) definition files (*.proto) in any project.

Apart from this change, the bug fixes in Beta 5 include:

Translations Editor rows are now aligned properly after scrolling.
Running another project using GradleBuild was causing java.lang.IllegalMonitorStateException: attempt to unlock read lock, not locked by current thread. This bug has now been fixed.
Layout styles which were improperly requiring API level 17 instead of 15 are now fixed.
Android Studio failing to navigate to certain styles, with the error message "Cannot find declaration to go to", has been fixed.
Linter is now properly resolving some API level values.

Read more about this update on the Android Studio developer page.

9 Most Important features in Android Studio 3.2
The Art of Android Development Using Android Studio
Build your first Android app with Kotlin

cstar: Spotify’s Cassandra orchestration tool is now open source!

Melisha Dsouza
07 Sep 2018
4 min read
On the 4th of September 2018, Spotify Labs announced that cstar, its Cassandra orchestration tool for the command line, would be made freely available to the public.

With Cassandra, achieving the right balance of performance, security, and data consistency is complicated. You often need to run a specific set of shell commands on every node of a cluster, usually in some coordination, to avoid bringing the cluster down. This can be easy for small clusters, but it gets tricky and time-consuming for big clusters. Imagine having to run those commands on all Cassandra nodes in the company: it would be time-consuming and labor-intensive.

A scheduled upgrade of the entire Cassandra fleet at Spotify involved a precise procedure with numerous steps. Since Spotify has clusters with hundreds of nodes, upgrading one node at a time is unrealistic, and upgrading all nodes at once wasn't an option either, since that would take down the whole cluster. In addition to the outlined performance problems, other complications while dealing with Cassandra included:

Temporary network failures, breaking SSH connections, among others.
Performance and availability can be affected if operations that are computation-heavy or that involve restarting the Cassandra process/node are not executed in a particular order.
Nodes can go down at any time, so the status of the cluster should be checked not just before running the task, but also before execution starts on each new node, which leaves little scope for naive parallelization.

Spotify was in dire need of an efficient and robust method to run such operations on thousands of machines in a coordinated manner.

Why was Ansible or Fabric not considered by Spotify?

Ansible and Fabric are not topology-aware. They can be made to run commands in parallel on groups of machines, and with some wrapper scripts and elbow grease a Cassandra cluster can be split into multiple groups, executing a script on all machines in one group in parallel. On the downside, this approach doesn't wait for Cassandra nodes to come back up before proceeding, nor does it notice if random Cassandra nodes go down during execution.

Enter cstar

cstar is based on Paramiko, a Python (2.7, 3.4+) implementation of the SSHv2 protocol, and shares the same ssh/scp implementation that Fabric uses. It is a command-line tool that runs an arbitrary script on all hosts in a Cassandra cluster in a "topology aware" fashion.

(Figure in the original post: cstar running on a 9-node cluster with a replication factor of 3, assuming the script brings down the Cassandra process; notice how there are always 2 available replicas for each token range. Source: Spotify Labs.)

cstar supports the following execution mechanisms:

The script is run on exactly one node per data center at a time. If you have N data centers with M nodes each and a replication factor of X, this effectively runs the script on M/X * N nodes at a time.
The script is run on all nodes at the same time, regardless of the topology.

Installing cstar and running a command on a cluster is easy; the original post walks through a quick example. (Source: Spotify Labs.)
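To illustrate what "topology aware" execution means in practice, here is a toy Python sketch (my own simplification, not cstar's implementation): nodes are batched so that no two replicas of the same token range are worked on at the same time, and each batch finishes before the next one starts.

```python
from concurrent.futures import ThreadPoolExecutor

def run_topology_aware(nodes, replica_sets, run_on_node):
    """nodes: iterable of host names.
    replica_sets: list of sets; each set holds the hosts replicating one token range.
    run_on_node: callable that runs the maintenance script on a single host."""
    remaining = set(nodes)
    while remaining:
        batch, busy_ranges = [], set()
        for node in sorted(remaining):
            ranges = {i for i, rs in enumerate(replica_sets) if node in rs}
            if ranges & busy_ranges:
                continue            # another node in this batch shares a token range
            batch.append(node)
            busy_ranges |= ranges
        with ThreadPoolExecutor() as pool:
            list(pool.map(run_on_node, batch))   # wait for the whole batch to finish
        remaining -= set(batch)
```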
The concept of 'Jobs'

The execution of a script on one or more clusters is a job. Job control in cstar works like in Unix shells: a user can pause running jobs and resume them at a later point in time. It is also possible to configure cstar to pause a job after a certain number of nodes have completed. This lets a user run a cstar job on one node, manually validate that the job worked as expected, and then resume the job.

These features have made it really easy for Spotify to work with its Cassandra clusters. You can find more insights in the original article on Spotify Labs.

Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
Baidu releases EZDL – a platform that lets you build AI and machine learning models without any coding knowledge
PrimeTek releases PrimeReact 2.0.0 Beta 3 version


Oracle releases GraphPipe: An open source tool that standardizes machine learning model deployment

Bhagyashree R
16 Aug 2018
3 min read
Oracle has released GraphPipe, an open source tool to simplify and standardize the deployment of machine learning (ML) models. Developing ML models is difficult, but deploying a model for customers to use is equally difficult. There are constant improvements on the development side, but people often don't think much about deployment. This is where GraphPipe comes into the picture.

What are the key challenges GraphPipe aims to solve?

No standard way to serve APIs: the lack of a standard for model-serving APIs limits you to whatever the framework gives you, and a business application typically needs its own client code just to talk to the deployed model. Deployment becomes even more difficult when you are using multiple frameworks, since you have to write custom code to create ensembles of models from multiple frameworks.
Building a model server is complicated: out-of-the-box solutions for deployment are few, because deployment gets less attention than training.
Existing solutions are not efficient enough: many of the currently used solutions don't focus on performance, so for certain use cases they fall short.

(A diagram of the current situation appears in GraphPipe's user guide.)

How does GraphPipe solve these problems?

GraphPipe uses flatbuffers as the message format for a predict request. Flatbuffers are like Google protocol buffers, with the added benefit of avoiding a memory copy during the deserialization step. A request message defined by the flatbuffer definition includes:

Input tensors
Input names
Output names

The request message is accepted by the GraphPipe remote model, which returns one tensor per requested output name, along with metadata about the types and shapes of the inputs and outputs it supports. (A diagram of the deployment situation with GraphPipe appears in GraphPipe's user guide.)

What are the features it comes with?

It provides a minimalist machine learning transport specification based on flatbuffers, an efficient cross-platform serialization library for C++, C#, C, Go, Java, JavaScript, Lobster, Lua, TypeScript, PHP, and Python.
It comes with simplified implementations of clients and servers that make deploying and querying machine learning models from any framework considerably easier. Its efficient servers can serve models built in TensorFlow, PyTorch, mxnet, CNTK, or Caffe2.
It provides efficient client implementations in Go, Python, and Java.
It includes guidelines for serving models consistently according to the flatbuffer definitions.

You can find plenty of documentation and examples at https://oracle.github.io/graphpipe. The GraphPipe flatbuffer spec can be found on Oracle's GitHub, along with servers that implement the spec for Python and Go.

Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Why Oracle is losing the Database Race
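As a taste of the client side described above, here is a minimal Python sketch that queries a GraphPipe model server using the graphpipe Python client's remote.execute call; the server address and input shape are placeholders, and the exact usage is documented on the GraphPipe site.

```python
# pip install graphpipe
import numpy as np
from graphpipe import remote

# Placeholder: a GraphPipe model server assumed to be listening locally on port 9000.
request = np.random.rand(1, 3, 224, 224).astype(np.float32)   # one input tensor
prediction = remote.execute("http://127.0.0.1:9000", request)
print(prediction.shape)
```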


Go 2 design drafts include plans for better error handling and generics

Prasad Ramesh
29 Aug 2018
3 min read
In the annual Go user survey, the top three requests made by users for Go version 2 were better package management, better error handling, and the inclusion of generics. Following these requests, the Go 2 draft designs were shared yesterday, covering error handling, error values, and generics. Note that these are not official proposals. Error handling and generics are at step 2 of the Go release cycle. (A diagram of the release cycle appears on the Go Blog.)

Yesterday, Google developer Russ Cox gave a talk on the design drafts for Go 2; the drafts were also previewed at GopherCon 2018. In his talk, he mentions that the current boilerplate contains too much code for error checks and that error reporting is not precise enough. For example, an error from os.Open does not mention the name of the file that could not be opened. Because proper error reporting only adds code, most programmers don't really bother with it, despite knowing that this may create confusion. The new idea, therefore, is to add a check expression that shortens the checks while keeping them explicit.

Cox also stresses the importance of experience reports. These reports are difficult but necessary for implementing new features: they turn abstract problems into concrete ones and are needed for changes to land in Go. They serve as test cases for evaluating a proposed solution and its effects on real-life use cases. Regarding the inclusion of generics, Cox says:

"I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. As a result, I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver. If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones."

Go 2 is not going to be a single release but a sequence of releases, adding features as and when they are ready. The approach is to first make features backward compatible with Go 1. Minor changes could appear in Go 1 in a year or so, and if there are no backward-incompatible changes, Go 1.20 may simply be declared Go 2. The conversation about Go 2 has started, and there is a call for the community to help convert the drafts into official proposals. Visit the Go page and the GitHub repository for more details.

Why Golang is the fastest growing language on GitHub
Golang 1.11 is here with modules and experimental WebAssembly port among other updates
GoMobile: GoLang's Foray into the Mobile World

Mozilla releases Firefox 67.0.3 and Firefox ESR 60.7.1 to fix a zero-day vulnerability, being abused in the wild

Bhagyashree R
19 Jun 2019
2 min read
Yesterday, Mozilla released Firefox 67.0.3 and Firefox ESR 60.7.1 to fix an actively exploited vulnerability that can enable attackers to remotely execute arbitrary code on devices running vulnerable versions. If you are a Firefox user, it is recommended that you update right now.

This critical zero-day flaw was reported by Samuel Groß, a security researcher with the Google Project Zero security team, and by the Coinbase Security team. It is a type confusion vulnerability, tracked as CVE-2019-11707, that occurs "when manipulating JavaScript objects due to issues in Array.pop. This can allow for an exploitable crash. We are aware of targeted attacks in the wild abusing this flaw." Not much information has been disclosed about the vulnerability yet, apart from this short description on the advisory page. In general, type confusion happens when a piece of code fails to verify the type of the object passed to it and blindly uses it without type checking.

The US Cybersecurity and Infrastructure Security Agency (CISA) also issued an alert advising users and administrators to update Firefox as soon as possible: "The Cybersecurity and Infrastructure Security Agency (CISA) encourages users and administrators to review the Mozilla Security Advisory for Firefox 67.0.3 and Firefox ESR 60.7.1 and apply the necessary updates."

Users can install the patched Firefox versions by downloading them from Mozilla's official website. Alternatively, they can click the hamburger icon in the upper-right corner, type Update into the search box, and hit the "Restart to update Firefox" button to be sure.

This is not the first time a zero-day vulnerability has been found in Firefox. Back in 2016, a vulnerability was reported in Firefox that attackers exploited to de-anonymize Tor Browser users; the attackers collected user data including IP addresses, MAC addresses, and hostnames. Mozilla then released an emergency fix in Firefox 50.0.2 and 45.5.1 ESR.

Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons
Firefox 67 enables AV1 video decoder 'dav1d', by default on all desktop platforms
Mozilla makes Firefox 67 "faster than ever" by deprioritizing least commonly used features
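As a deliberately simplified analogy of the type confusion idea mentioned above (this is not the actual Firefox bug, just a toy Python illustration of code trusting an object's claimed shape without verifying what it really is):

```python
class SmallArray:
    def __init__(self):
        self.length = 4
        self.data = [0, 1, 2, 3]

class BigArray:
    def __init__(self):
        self.length = 1_000_000     # claims a huge length...
        self.data = [0, 1, 2, 3]    # ...but is backed by tiny storage

def read_element(arr, index):
    # The "confused" code trusts arr.length instead of verifying what arr really
    # is, so an index that passes this check can still fall outside arr.data.
    if index < arr.length:
        return arr.data[index]
    raise IndexError("index out of declared bounds")

try:
    read_element(BigArray(), 10)    # passes the length check...
except IndexError:
    print("out-of-bounds access attempted")   # ...in unsafe native code this
                                              # would be an out-of-bounds read
```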


Apple convincingly lobbied against ‘right to repair’ bill in California citing consumer safety concern

Amrata Joshi
03 May 2019
3 min read
Apple is known for designing its products in a way that makes them hard for anyone but Apple experts to repair when issues arise. It now seems the company is trying hard to kill the 'Right to Repair' bill in California, which could work against Apple. The 'Right to Repair' bill, which has been adopted by 18 states, is currently under discussion in California. Under this bill, consumers would get the right to fix or mod their devices without any effect on their warranty. The company has managed to lobby California lawmakers and push the bill back to 2020.

https://twitter.com/kaykayclapp/status/1123339532068253696

According to a recent report by Motherboard, an Apple representative and a lobbyist have been privately meeting with legislators in California to encourage them to back off the bill. The company is doing so by stoking fears of battery explosions among consumers who attempt to repair their own iPhones: the Apple representative argued that consumers might hurt themselves if they accidentally puncture the flammable lithium-ion batteries in their phones.

In a statement to The Verge, California Assemblymember Susan Talamantes Eggman, who first introduced the bill in March 2018 and again in March 2019, said, "While this was not an easy decision, it became clear that the bill would not have the support it needed today, and manufacturers had sown enough doubt with vague and unbacked claims of privacy and security concerns."

Last quarter, Apple's iPhone sales slowed down, so the company anticipates that consumers may buy new handsets instead of getting their old ones repaired. Still, the claim that batteries might get punctured will worry many and is sure to generate plenty of speculation. Kyle Wiens, iFixit co-founder, laughs off the idea of an iPhone battery getting punctured during a repair; he admits it is possible, but says it rarely happens. Wiens says, "Millions of people have done iPhone repairs using iFixit guides, and people overwhelmingly repair these phones successfully. The only people I've seen hurt themselves with an iPhone are those with a cracked screen, cutting their finger." He further added, "Whether it uses gasoline or a lithium-ion battery, most every car has a flammable liquid inside. You can also get badly hurt if you're changing a tire and your car rolls off the jack." On the other hand, a recent example from David Pierce, WSJ tech reviewer, shows that such battery failures do happen.

https://twitter.com/pierce/status/1113242195497091072

With so much talk around repairing and replacing, it's difficult to predict whether a 'Right to Repair' bill covering iPhones will come into force anytime soon. Only in 2020 will we get a clearer picture of the bill, and learn whether consumer safety is really at stake or whether the concern is more about the company's interests.

Apple plans to make notarization a default requirement in all future macOS updates
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Apple officially cancels AirPower; says it couldn't meet hardware's 'high standards'