
Tech News

3711 Articles

Racket 7.3 releases with improved Racket-on-Chez, refactored IO system, and more

Bhagyashree R
17 May 2019
2 min read
Earlier this week, the team behind Racket announced the release of Racket 7.3. This release comes with an improved Racket-on-Chez, a refactored IO system, a new shear function in the Pict library, and more. Racket is a general-purpose, multi-paradigm programming language and a dialect of Lisp and Scheme.

Updates in Racket 7.3

Snapshot builds of Racket-on-Chez are now available

Racket's core was largely implemented in C, which complicates its portability to different systems, its maintenance, and its performance. Hence, back in 2017, the team decided to make the Racket distribution run on Chez Scheme. With the last release (Racket 7.2), the team shared that the implementation of Racket on Chez Scheme (Racket CS) was almost complete, with all functionality in place. With this release, the team has added more improvements to Racket-on-Chez and made its snapshot builds available on Racket Snapshots. The team further shared that by the next release we can expect Racket-on-Chez to be included as a download option.

Other updates

In addition to the improvements in Racket-on-Chez, the following updates are introduced:

- Racket's IO system has been refactored to provide better performance and a simplified internal design. The JSON reader is now dramatically faster.
- The Racket web library now comes with improved support for 307 redirects.
- The Plot library gains color map support for renderers. The Plot library lets you produce many kinds of plots, including scatter plots, line plots, contour plots, histograms, and 3D surfaces and isosurfaces.
- A shear function has been added to the Pict library, one of Racket's standard functional picture libraries.

Read the full announcement on Racket's official website.

Racket 7.2, a descendant of Scheme and Lisp, is now out!
Racket v7.0 is out with overhauled internals, updates to DrRacket, TypedRacket among others
Swift is improving the UI of its generics model with the "reverse generics" system

2019 Deloitte tech trends predictions: AI-fueled firms, NoOps, DevSecOps, intelligent interfaces, and more

Natasha Mathur
29 Jan 2019
6 min read
Deloitte launched its tenth annual "Tech Trends 2019: Beyond the digital frontier" report earlier this month. The report covers predictions related to artificial intelligence, the digital future, cloud, networking, and cybersecurity. Let's have a look at the key predictions made in the report.

More companies to transform into AI-fueled organizations

The Deloitte 2019 report states that an increasing number of companies will complete the transformation to fully autonomous, AI-fueled firms in the next 18 to 24 months, making AI a major part of their corporate strategy. In an AI-fueled firm, AI, machine learning, and other cognitive technologies run at the center of business and IT operations to harness data-driven insights. As per two consecutive Deloitte global surveys (2016–17 and 2018), cognitive technologies/AI topped the list of emerging technologies in which CIOs plan to invest. The AI ambitions of these CIOs are mostly about using AI to increase productivity and to strengthen regulatory compliance through automation.

Companies to make the transition from traditional to serverless environments (NoOps)

The report states that many CIOs will be looking at creating a NoOps IT environment that is automated and abstracted from the underlying infrastructure. Such an environment requires only small teams to manage it, thereby allowing CIOs to invest more human capacity in developing new capabilities that improve overall operational efficiency. In NoOps environments, traditional operations such as code deployment and patching schedules remain internal responsibilities but are mostly automated. The shift from traditional to serverless computing allows cloud vendors to dynamically and automatically allocate compute, storage, and memory depending on the request for a higher-order service; traditional cloud service models required organizations to design and provision such allocations manually. Serverless computing offers limitless scalability, high availability, and NoOps, along with zero idle-time costs. (A minimal code sketch of the serverless idea appears at the end of this piece.)

More companies expected to take advantage of advanced connectivity to configure and operate enterprise networks

As per the Deloitte report, many companies will opt for advanced networking to drive the development of new products and to transform inefficient operating models. CIOs are going to virtualize parts of the connectivity stack with the help of network management techniques such as software-defined networking (SDN) and network function virtualization (NFV). SDN is primarily used in data centers, but its use is now being extended to wide area networking to connect data centers. NFV virtualizes network functions such as routing, switching, encryption, firewalling, and WAN acceleration, and can scale horizontally or vertically on demand. The report states that enterprises will be able to optimize or "spin up" network capabilities on demand to fulfill the needs of a specific application or to meet end-user requirements.

Growth in interfaces like computer vision and gesture control devices will transform how humans, machines, and data interact

The report states that although conversational technologies currently dominate the intelligent-interfaces arena, other new interfaces, such as computer vision, gesture control devices, embedded eye-tracking platforms, bioacoustic sensing, and emotion detection/recognition technology, are gaining ground. Intelligent interfaces help track customers' offline habits, similar to how search engines and social media companies track their customers' digital habits. These interfaces also help companies understand customers at a personal, more detailed level, making it possible to "micro-personalize" products and services. We will see more of these new interfaces combined with leading-edge technologies (such as machine learning, robotics, IoT, contextual awareness, and advanced augmented and virtual reality) to transform the way we engage with machines, data, and each other.

CMOs and CIOs will partner up to elevate the human experience by moving beyond marketing

The report states that channel-focused services, such as websites, social and mobile platforms, content management tools, and search engine optimization, are slowly becoming a thing of the past. Many organizations will move beyond marketing by adopting a new generation of martech systems and a new approach to data gathering, decision-making (determining how and when to provide an experience), and delivery (consistent delivery of dynamic content across channels). This, in turn, helps companies create personalized, dynamic end-to-end experiences for users and builds deep emotional connections between users and products and brands. CMOs are increasingly required to own the delivery of the entire customer experience, and they often find themselves nearly playing the CIO's traditional role. At the same time, CIOs are required to transform legacy systems and build new infrastructure to support next-generation data management and customer engagement systems. This is why CIOs and CMOs will collaborate more closely to deliver on their company's new marketing strategies as well as on established digital agendas.

Organizations to embed DevSecOps to improve cybersecurity

As per the report, many organizations have started to use an approach called DevSecOps, which embeds security culture, practices, and tools into each phase of their DevOps pipelines. DevSecOps helps improve the security and maturity levels of a company's DevOps pipeline. It is not a security trend but a new approach that offers companies a different way of thinking about security. DevSecOps has multiple benefits: it helps security architects, developers, and operators share security-aligned metrics and focus on business priorities; organizations embedding it into their development pipelines can use operational insights and threat intelligence; and it enables proactive monitoring, with automated, continuous testing to identify problems early. The report recommends that DevSecOps tie into your broader IT strategy, which should in turn be driven by your business strategy. "If you can be deliberate about sensing and evaluating emerging technologies…you can make the unknown knowable…creating the confidence and construct to embrace digital while setting the stage to move beyond the digital frontier", reads the report.

For more information, check out Deloitte's official 2019 tech trends report.

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
We discuss the key trends for web and app developers in 2019 [Podcast]
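As a footnote to the NoOps section above, here is a minimal sketch of a serverless function in the AWS Lambda style. The handler signature and response shape follow Lambda's Python convention, but the function body and names are our own illustration, not anything from the Deloitte report:

```python
import json

def lambda_handler(event, context):
    """Entry point invoked by the cloud platform on each request.

    Note what is absent: there is no server, OS, or capacity plan to
    manage. The provider allocates compute, memory, and scaling on
    demand, which is the "NoOps" property the report describes.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The team's operational work reduces to writing and deploying this function; patching, provisioning, and idle capacity are the platform's problem.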

Palantir’s software was used to separate families in a 2017 operation, reveals Mijente

Savia Lobo
06 May 2019
4 min read
Documents released this week reveal that the data-mining firm Palantir was responsible for a 2017 operation that targeted and arrested family members of children crossing the border alone. The documents stand in stark contrast to what Palantir said its software was doing. This discrepancy was first identified by Mijente, an advocacy organization that has closely tracked Palantir's murky role in immigration enforcement. The documents confirm "the role Palantir technology played in facilitating hundreds of arrests, only a small fraction of which led to criminal prosecutions", The Intercept reports.

Palantir, a software firm founded by Peter Thiel, one of President Trump's most vocal supporters in Silicon Valley, develops software that helps agents analyze massive amounts of personal data and build profiles for prosecution and arrest. In May 2018, Amazon employees, in a letter to Jeff Bezos, protested against the sale of the company's facial recognition tech to Palantir, saying they "refuse to contribute to tools that violate human rights" and citing the mistreatment of refugees and immigrants by ICE.

Read Also: Amazon addresses employees dissent regarding the company's law enforcement policies at an all-staff meeting, in a first

Palantir earlier said it was not involved with the part of ICE that is strictly devoted to deportations and the enforcement of immigration laws. However, Palantir's $38 million contract with Homeland Security Investigations (HSI), a component of ICE, had a far broader criminal enforcement mandate.

https://twitter.com/ConMijente/status/1124056308943138834

The 2017 ICE operation was designed to dissuade children from joining family members in the United States by targeting parents and sponsors for arrest. According to The Intercept, "Documents obtained through Freedom of Information Act litigation and provided to The Intercept show that this claim, that Palantir software is strictly involved in criminal investigations as opposed to deportations, is false." As part of the operation, ICE arrested 443 people solely for being undocumented. Palantir's software was used throughout, helping agents build profiles of immigrant children and their family members for the prosecution and arrest of any undocumented person they encountered in their investigation.

https://twitter.com/ConMijente/status/1124056314106322944

"The operation was underway as the Trump administration detained hundreds of children in shelters throughout the country. Unaccompanied children were taken by border agents, sent to privately-run facilities, and held indefinitely. Any undocumented parent or family member who came forward to claim children was arrested by ICE for deportation. More children were kept in detention longer, as relatives stopped coming forward", Mijente reports. Mijente further mentions in its post, "Mijente is urging Palantir to drop its contract with ICE and stop providing software to agencies that aid in tracking, detaining, and deporting migrants, refugees, and asylum seekers. As Palantir plans its initial public offering, Mijente is also calling on investors not to invest in a company that played a key role in family separation."

The seven-page document, titled "Unaccompanied Alien Children Human Smuggling Disruption Initiative," details how one of Palantir's software solutions, Investigative Case Management (ICM), can be used by agents stationed at the border to build cases against unaccompanied children and their families. Mijente further mentions, "This document is further proof that Palantir's software directly aids in prosecutions for deportation carried out by HSI agents. Not only are HSI agents involved in deportations in the interior, but they are also actively aiding border agents by investigating and prosecuting relatives of unaccompanied children hoping to join their families."

Jesse Franzblau, senior policy analyst for the National Immigrant Justice Center, said in an email to The Intercept, "The detention and deportation machine is not only driven by hate, but also by profit. Palantir profits from its contract with ICE to help the administration target parents and sponsors of children, and also pays Amazon to use its servers in the process. The role of private tech behind immigration enforcement deserves more attention, particularly with the growing influence of Silicon Valley in government policymaking."

"Yet, Palantir's executives have made no move to cancel their work with ICE. Its founder, Alex Karp, said he's 'proud' to work with the United States government. Last year, he reportedly ignored employees who 'begged' him to end the firm's contract with ICE", the Mijente report mentions.

To know more about this news in detail, head over to the official report.

Lerna relicenses to ban major tech giants like Amazon, Microsoft, Palantir from using its software as a protest against ICE
Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license
"We can sell dangerous surveillance systems to police or we can stand up for what's right. We can't do both," says a protesting Amazon employee

NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!

Melisha Dsouza
15 Oct 2018
3 min read
"Technology is front and center in every business strategy, and enterprises of all sizes and in all industries must embrace digital to attract, retain, and enrich customers," says Gus Robertson, CEO, NGINX.

At NGINX Conf 2018, the NGINX team announced enhancements to its Application Platform that will serve as a common framework across monolithic and microservices-based applications. The upgrade comes with three new releases: NGINX Plus, NGINX Controller, and NGINX Unit. These have been engineered to provide a built-in service mesh for managing microservices and an integrated application programming interface (API) management platform, while retaining the traditional load balancing capabilities and a web application firewall (WAF).

An application delivery controller (ADC) is used to improve the performance of web applications. The ADC acts as a mediator between web and application servers and their clients, transferring requests and responses between them while enhancing performance through processes such as load balancing, caching, compression, and offloading of SSL processing. (A toy sketch of the round-robin load-balancing idea appears at the end of this article.) The main aim of re-architecting NGINX's platform and launching these updates was to provide a more comprehensive approach to integrating load balancing, service mesh technologies, and API management, leveraging the modular architecture of the NGINX Controller.

Here is a gist of the three new NGINX product releases:

#1 NGINX Controller 2.0
This is an upgrade of NGINX Controller 1.0, which launched in June 2018 with centralized management, monitoring, and analytics for NGINX Plus load balancers. NGINX Controller 2.0 brings advanced NGINX Plus configuration, including version control, diffing, reverting, and many more features. It also includes an all-new API Management Module, which manages NGINX Plus as an API gateway. Besides this, the controller will also gain a Service Mesh Module in the future.

#2 NGINX Plus R16
R16 comes with dynamic clustering, including clustered state sharing and key-value stores for global rate limiting and DDoS mitigation. It also brings load balancing algorithms for Kubernetes and microservices, enhanced UDP for VoIP and VDI, and AWS PrivateLink integration.

#3 NGINX Unit 1.4
This release improves security and language support, adding support for TLS. It also adds JavaScript with Node.js to the existing Go, Perl, PHP, Python, and Ruby language support.

Enterprises can now use the NGINX Application Platform to function as a Dynamic Application Gateway and a Dynamic Application Infrastructure. NGINX Plus and NGINX are used by popular, high-traffic sites such as Dropbox, Netflix, and Zynga. More than 319 million websites worldwide rely on NGINX Plus and NGINX application delivery platforms.

To know more about this announcement, head over to DevOps.com.

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0
Getting started with F# for .Net Core application development [Tutorial]
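To make the ADC's load-balancing role concrete, here is a toy round-robin balancer in Python. It is purely illustrative: the upstream addresses are invented, and this is not how NGINX itself is implemented or configured.

```python
from itertools import cycle

# Hypothetical pool of application servers sitting behind the ADC.
UPSTREAMS = cycle([
    "10.0.0.11:8080",
    "10.0.0.12:8080",
    "10.0.0.13:8080",
])

def route(request_path: str) -> str:
    """Pick the next upstream in round-robin order.

    A real ADC such as NGINX Plus layers caching, compression,
    health checks, and TLS offloading on top of this basic idea.
    """
    upstream = next(UPSTREAMS)
    return f"forwarding {request_path} -> {upstream}"

if __name__ == "__main__":
    for path in ["/home", "/api/items", "/login", "/home"]:
        print(route(path))
```

Round-robin is only the simplest strategy; production balancers add weighting, session persistence, and health-aware failover on top.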

Googlers for ending forced arbitration: a public awareness social media campaign for tech workers launches today

Natasha Mathur
15 Jan 2019
4 min read
There has been a running battle between Google and its employees for quite some time now. A group of Google employees announced yesterday that they are launching a public awareness social media campaign from 9 AM to 6 PM EST today. The group, called 'Googlers for ending forced arbitration', aims to educate people about forced arbitration policies via Instagram and Twitter, where they will also share their own experiences with the practice.

https://twitter.com/endforcedarb/status/1084813222505410560

As part of its efforts, the group has surveyed fellow tech employees, academic institutions, labor attorneys, and advocacy groups, as well as the contracts of around 30 major tech companies. The group also published a post on Medium yesterday, stating that "ending forced arbitration is the gateway change needed to transparently address inequity in the workplace". According to the National Association of Consumer Advocates, "In forced arbitration, a company requires a consumer or employee to submit any dispute that may arise to binding arbitration as a condition of employment or buying a product or service. The employee or consumer is required to waive their right to sue, to participate in a class action lawsuit, or to appeal".

https://twitter.com/ODwyerLaw/status/1084893776429178881

Demands for more transparency around Google's sexual assault policies have become a bone of contention at Google. For instance, shareholder James Martin and two pension funds sued Alphabet's board members last week for protecting top executives accused of sexual harassment. The lawsuit, which seeks major changes to Google's corporate governance, also urges more clarity around Google's policies. Similarly, Liz Fong-Jones, developer advocate at Google Cloud Platform, revealed earlier this month that she is planning to leave the firm, citing Google's lack of leadership in addressing the demands made by employees during the Google walkout.

It was back in November 2018 that over 20,000 Google employees organized the Google "walkout for real change" and walked out of their offices, along with temps and contractors, to protest the discrimination, racism, and sexual harassment encountered within Google. Google employees made five demands as part of the walkout, including ending forced arbitration for all employees (including temps) in cases of sexual harassment and other forms of discrimination.

Although Google announced in response to the walkout that it was ending its forced arbitration policy (a move soon followed by Facebook), Google employees are not convinced. They argue that the announcement only made for strong headlines and did not actually do enough for employees. The employees noted that there were "no meaningful gains for worker equity … nor any actual change in employee contracts or future offer letters (as of this publication, we have confirmed Google is still sending out offer letters with the old arbitration policy)".

Moreover, forced arbitration still exists at Google for cases involving other forms of workplace harassment and discrimination that are non-sexual in nature. Google has made the forced arbitration policy optional only for individual cases of sexual assault brought by full-time employees; it still applies to class-action lawsuits and to the thousands of contractors who work for the company. Additionally, employee contracts in the US still have the arbitration waiver in effect. "Our leadership team responded to our five original demands with a handful of partial policy changes. The other 'changes' they announced simply re-stated our current, ineffective practices or introduced extraneous measures that are irrelevant to bringing equity to the workplace", mentions the group in a blog post on Medium.

Follow the public awareness campaign on the group's Instagram and Twitter accounts.

Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"

Equifax breach victims may not even get the promised $125; FTC urges them to opt for 10-year free credit monitoring services

Savia Lobo
01 Aug 2019
5 min read
When Equifax announced a global settlement of up to $425 million with the FTC and said that users affected by its 2017 data breach could file a claim, the public response was overwhelming. The FTC says "millions of people have visited ftc.gov/Equifax and gone on to the settlement website's claims form". Under the settlement announced last month, consumers can claim free credit monitoring services or, alternatively, request a cash payment if they already have credit monitoring.

Yesterday, the FTC released a statement urging consumers to choose the ten years of free credit monitoring services instead. Only those who certify that they already have credit monitoring are advised to claim up to $125. The FTC explains that this is because "the pot of money that pays for that part of the settlement is $31 million. A large number of claims for cash instead of credit monitoring means only one thing: each person who takes the money option will wind up only getting a small amount of money. Nowhere near the $125 they could have gotten if there hadn't been such an enormous number of claims filed."

The FTC suggests consumers opt for the 10-year free monitoring service because "the market value would be hundreds of dollars a year". "It monitors your credit report at all three nationwide credit reporting agencies, and it comes with up to $1 million in identity theft insurance and individualized identity restoration services", the FTC adds.

https://twitter.com/LauraSullivaNPR/status/1156617951245721601

The FTC is now attempting to persuade users that ten years of free credit monitoring by a company that has been lax with its security is a better bet than claiming the low-risk yet paltry sum of $125. Coming at a time when many users want to discontinue their relationship with the company, this makes one question whom the FTC is protecting: the people victimized by the data breach, or Equifax, whose irresponsible data and security practices exposed millions to risk.

https://twitter.com/ScottFeldman/status/1156639735063990272

The FTC says there is still money available; however, it is to "reimburse people for what they paid out of their pocket to recover from the breach. Say you had to pay for your own credit freezes after the breach, or you hired someone to help you deal with identity theft. The settlement has a larger pool of money for just those people. If you're one of them, use your documents to submit your claim." CNBC reports, "Equifax could not immediately be reached for comment."

Many consumers are highly infuriated by this revised guidance, and surprised that just $31 million of the settlement covers cash claims for the compromise of millions of users' data. Andy Baio, a former CTO of Kickstarter, tweeted, "If any more than 248,000 people request cash settlements instead of credit monitoring, the payout starts shrinking. If a million people ask for cash, for example, the settlement goes down to $31."

https://twitter.com/waxpancake/status/1154877051574214656

A user on Reddit questions how Equifax is "only being fined $31 million for exposing sensitive data of half the nations population? That's less than $0.19 per person whose data was hacked".
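The arithmetic behind these reactions is easy to verify. Using the figures quoted above ($31 million cash pool, $125 full payout, one million hypothetical claimants), and assuming roughly 163 million people for the "half the nation's population" claim:

```python
CASH_POOL = 31_000_000  # the settlement's alternative-payment fund, in dollars

# Full $125 payouts only hold while claim volume stays under the cap:
print(CASH_POOL / 125)          # 248000.0 -> Baio's 248,000-claim threshold

# If a million people file cash claims instead:
print(CASH_POOL / 1_000_000)    # 31.0 -> "the settlement goes down to $31"

# Spread across ~163 million people (assumed: half the US population):
print(CASH_POOL / 163_000_000)  # ~0.19 -> "less than $0.19 per person"
```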
Another user on HackerNews writes, "It seems absurd that they only need to allocate $31 million for 'alternative payments' while the old CEO leaves with close to $20 million in bonuses, while the rest of the money in the settlement is basically reserved for them to pay themselves for their 'free' credit monitoring." He further adds, "This whole situation was a good opportunity to set a precedent for companies not taking data security seriously. But they've instead shown everyone that you can really just ignore all of that and hope it's never discovered - even if it is, it's really just a light slap on the wrist. Combining this with the recent Facebook fine, it really makes me think that the FTC has become a complete joke."

Another furious user wrote on HackerNews, "$31 million is a laughably small amount of money to set aside for direct settlements in the biggest hack in all of history. Add three zeroes to that, probably still not enough." "I spent three days figuring out this nightmarish credit reporting system and helping friends and family place freezes, as well as educating them to avoid all the horrible dark patterns on Equifax's site. What I want is about $2000 and the ability to opt-out of them owning and reselling my personal data completely. I don't need credit monitoring, I don't need credit period anymore, why am I forced into accepting the unlimited risk of them owning all my data so that this private company can keep operating?", the user further added.

https://twitter.com/ryanlcooper/status/1156638207032692737

To know more about this news in detail, head over to the FTC's official statement.

Stefan Judis, a Twilio web developer, on responsible web development with HTTP headers
Ex-Amazon employee hacks Capital One's firewall to access its Amazon S3 database; 100m US and 60m Canadian users affected
Equifax data breach could have been "entirely preventable", says House oversight and government reform committee staff report

Microsoft releases Windows 10 Insider build 17682!

Natasha Mathur
01 Jun 2018
3 min read
Microsoft announced today that it is releasing Windows 10 Insider build 17682 from the RS5 branch. The new build includes Sets improvements, a better wireless projection experience, Microsoft Edge improvements, and RSAT on demand, along with other updates and fixes.

Major improvements and updates

Sets improvements

The new tab page has been updated to make it easy to launch apps: on clicking the plus button in a Sets window, apps are visible in the frequent destinations list. The all-apps list has been integrated into the new tab page so you can browse apps instead of using the search box. Apps supporting Sets will launch into a new tab when clicked. If you see News Feed instead, just select the "Apps" link next to "News Feed" to switch to the all-apps list.

Managing the wireless projection experience

Earlier, users had little control during wireless projection when a session was started through File Explorer or an app. This has been fixed in build 17682: a control banner now appears at the top of the screen during a session. The control banner informs you of your connection state, lets you tune the connection, and helps with quickly disconnecting or reconnecting to the same sink. Tuning is done via the settings gear, where screen-to-screen latency is optimized for the following scenarios:

- Game mode makes gaming over a wireless connection possible by minimizing screen-to-screen latency.
- Video mode ensures smooth, glitch-free playback of videos on the big screen by increasing screen-to-screen latency.
- Productivity mode strikes a balance between game mode and video mode: screen-to-screen latency is responsive enough that typing feels natural, while limiting glitches in videos.

All connections start off in productivity mode.

Improvements in Microsoft Edge for developers

Windows 10 Insider build 17682 brings unprefixed support for the new Web Authentication API (WebAuthn). Web Authentication provides a scalable and interoperable solution for replacing passwords with stronger hardware-bound credentials. Microsoft Edge users can use Windows Hello (via PIN or biometrics) as well as external authenticators, namely FIDO2 Security Keys or FIDO U2F Security Keys, to authenticate to websites securely.

RSAT available on demand

There is no need to manually download RSAT on every upgrade. Select "Manage optional features" in Settings, then click "Add a feature" to see all the listed RSAT components. Pick the components you want, and on the next upgrade Windows will ensure that those components automatically persist through the upgrade.

More information about other known issues and improvements is on the Windows Blog.

Microsoft Cloud Services get GDPR Enhancements
Microsoft releases Windows 10 SDK Preview Build 17115 with Machine Learning APIs
Microsoft introduces SharePoint Spaces, adds virtual reality support to SharePoint

Nim 1.0 releases with improved library, backward compatibility and more

Amrata Joshi
24 Sep 2019
2 min read
Yesterday, the team behind Nim announced version 1.0 of the language. Nim is a general-purpose, compiled programming language that focuses on efficiency, readability, and flexibility.

Major changes in Nim 1.0

Backwards compatibility

The switch -d:nimBinaryStdFiles has been removed in this release, and stdin/stdout/stderr are binary files again. The language definition and compiler are now stricter about gensym'ed symbols in hygienic templates.

Changes made to the library

The team has removed unicode.Rune16 in this release, as the name 'Rune16' was wrong. encodings.getCurrentEncoding now distinguishes between the OS's encoding and the console's encoding. The json.parseJsonFragments iterator can speed up JSON processing. Oid usage has been enabled in hashtables, and a std/monotimes module has been added that implements monotonic timestamps.

Compiler

The Nim compiler now warns about unused module imports; a module can carry a top-level {.used.} pragma so that it can be imported without triggering the warning. The compiler also no longer recompiles a project via nim c -r if no dependent Nim file has changed.

Users seem to be excited about this news and appreciate the effort the team has put in. A user commented on HackerNews, "Great! I love this language, so simple and powerful, so fast executables!" Another user commented, "I would have never thought to live long enough to see this happening! I started using Nim in 2014, but abandoned it after a few years, frustrated by the instability of the language and what I perceived as a lack of vision. (In 2014, release 1.0 was said to be 'behind the corner'.) This release makes me eager to try it again. I remember that the language impressed me a lot: easy to learn, well-thought, and very fast to compile. Congratulations to the team!"

Other interesting news in programming

How Quarkus brings Java into the modern world of enterprise tech
LLVM 9 releases with official RISC-V target support, asm goto, Clang 9, and more
Twitter announces to test 'Hide Replies' feature in the US and Japan, after testing it in Canada

dav1d 0.1.0, the AV1 decoder by VideoLAN, is here

Prasad Ramesh
12 Dec 2018
2 min read
Yesterday, VideoLAN president Jean-Baptiste Kempf announced dav1d 0.1.0. dav1d is an AV1 decoder from VideoLAN, the organization behind the popular VLC media player. dav1d was first presented at the Video Developer Days 2018. This first usable version, 0.1.0, is dubbed "Gazelle": with this release, users can use the API, ship the decoder, and expect some support from the developers.

New features in dav1d 0.1.0

Since the initial launch of dav1d in September 2018, a lot of work has gone into it:

- All AV1 features are now supported, even the less well-known ones.
- 8-, 10-, and 12-bit depths and all chroma subsamplings are supported.
- All AV1 files shared with the developers are supported.

The developers invested a lot of time in making dav1d 0.1.0 fast while keeping the binary size maintainable. More assembly for desktop has been added, and some assembly for ARMv8 and for older machines (SSSE3) has been merged. In single-threaded use on ARMv8, dav1d is now as fast as libaom; with more threads it is even faster. More SSSE3 code is being merged, so dav1d will soon be faster than other decoders on all platforms. There is also work being done on shaders, potentially to bring the film grain feature.

Some benchmarks of dav1d 0.1.0

dav1d's biggest advantage is its high scalability: performance keeps improving as the number of threads goes up.

[Benchmark chart: decoding results on a 32-core AMD Epyc processor. Source: Medium]

As the chart shows, aomdec caps out at 8 threads, while dav1d keeps scaling with higher thread counts.

[Benchmark chart: performance on smartphone processors. Source: Medium]

On multiple cores, 1080p at 30fps can be decoded by most high-end chips released in the past two years. On an Apple A12X, 1440p at 60fps and 4K at 30fps are possible! For more benchmarks and complete comparisons, visit the Medium post.

Presenting dav1d, a new lightweight AV1 decoder, by VideoLAN and FFmpeg
A new Video-to-Video Synthesis model uses Artificial Intelligence to create photorealistic videos
Mozilla shares how AV1, the new open source royalty-free video codec, works

Dagger 2.17, a dependency injection framework for Java and Android, is now out!

Bhagyashree R
09 Aug 2018
2 min read
Following the consecutive releases of Dagger 2.15 and 2.16 in May earlier this year, Dagger 2.17 is now out with enhanced performance and bug fixes. This dependency injection framework for Java and Android lets developers focus on the interesting classes (the classes that actually do stuff!): you declare the dependencies, specify how to satisfy them, and ship your app.

What's new in Dagger 2.17?

Bug fixes and error improvements:

- Previously, when a @Binds method in a parent was used only from a child, and its dependency was missing in the parent but present in the child, it incorrectly resulted in a valid graph. Dagger now reports an error in such cases.
- An error is reported for binding methods that have more than one scope annotation, instead of throwing an exception.
- If two entry point methods with different keys are inherited from different supertypes of a component type, Dagger reports an error.
- Dagger reports an error for scope annotations on @BindsOptionalOf methods. Apply the scope to the non-optional binding that satisfies the optional binding, instead of to the @BindsOptionalOf method.
- You should install AndroidInjectionModule or AndroidSupportInjectionModule when using dagger.android; otherwise Dagger 2.17 will throw a missing binding error.
- Fixed a bug so that cycles are reported even if no component has an entry point that depends on the cycle.
- Fixed a bug where scope annotations in error messages were missing their annotation attributes.

Additions and deprecations:

- An option has been added to use string keys for dagger.android, allowing the keys to be obfuscated. You can enable this mode with the -Adagger.android.experimentalUseStringKeys flag.
- experimentalAndroidMode has been renamed to fastInit.
- dagger.android.DaggerFragment is deprecated; use dagger.android.support.DaggerFragment instead. This matches Android Pie's deprecation of framework fragments.

Check out Dagger's GitHub page for more on the 2.17 release.

Introducing Android 9 Pie, filled with machine learning and baked-in UI features
All new Android apps on Google Play must target API Level 26 (Android Oreo) or higher, to publish
Android Studio 3.2 Beta 5 out, with updated Protobuf Gradle plugin

A new episodic memory-based curiosity model to solve procrastination in RL agents by Google Brain, DeepMind and ETH Zurich

Bhagyashree R
26 Oct 2018
5 min read
The Google Brain team, together with DeepMind and ETH Zurich, has introduced an episodic memory-based curiosity model that allows reinforcement learning (RL) agents to explore environments intelligently. The model is the result of a study called Episodic Curiosity through Reachability, the findings of which Google AI shared yesterday.

Why was this episodic curiosity model introduced?

In real-world scenarios, the rewards needed for reinforcement learning are sparse, and most current reinforcement learning algorithms struggle with such sparsity. Wouldn't it be better if the agent could create its own rewards? That is what this model does: it makes the rewards denser and more suitable for learning.

Researchers have explored curiosity-driven learning approaches before; one of them is the Intrinsic Curiosity Module (ICM). This method is explored in the paper Curiosity-driven Exploration by Self-supervised Prediction, published by Ph.D. students at the University of California, Berkeley. ICM builds a predictive model of the dynamics of the world, and the agent is rewarded when the model fails to make good predictions. Exploring unvisited locations is not directly part of the ICM curiosity formulation; in ICM, visiting them is only one way to obtain more "surprise" and thus maximize overall reward. As a result, in some environments there can be other ways to cause self-surprise, leading to unforeseen results. The authors of the ICM method, along with researchers at OpenAI, show a hidden danger of surprise maximization in their study Large-Scale Study of Curiosity-Driven Learning: instead of doing something useful for the task at hand, agents can learn to indulge in procrastination-like behavior. The episodic memory-based curiosity model overcomes this procrastination issue.

What is the episodic memory-based curiosity model?

The model uses a deep neural network trained to measure how similar two experiences are. To train it, the researchers made it guess whether two observations were experienced close together in time or far apart. Temporal proximity is a good proxy for whether two observations should be judged part of the same experience. This training yields a general notion of novelty via reachability.

[Figure: novelty via reachability; observations within a few steps of memory are familiar, those many steps away are novel. Source: Google AI]

How this model works

Inspired by curious behavior in animals, the model rewards the agent with a bonus when it observes something novel. This bonus is summed with the real task reward, making it possible for the RL algorithm to learn from the combined reward. To calculate the bonus, the current observation is compared with the observations in memory, based on how many environment steps it would take to reach the current observation from those in memory.

[Figure: the episodic curiosity module comparing the current observation against episodic memory. Source: Google AI]

The method follows these steps (a minimal code sketch of the bonus computation appears below):

- The agent's observations of the environment are stored in an episodic memory.
- The agent is rewarded for reaching observations that are not yet represented in memory; being "not in memory" is this method's definition of novelty.

Such unfamiliarity-seeking behavior leads the agent to new locations, keeping it from wandering in circles and ultimately helping it stumble on the goal.

Experiment and results

Different approaches to curiosity were tested in two visually rich 3D environments: ViZDoom and DMLab.
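Before turning to the tasks, here is a minimal sketch of the reachability bonus described above. The comparator network is replaced by a placeholder function and the memory is a plain Python list; both are illustrative simplifications of the paper's method, not its actual code:

```python
import random

def reachability(obs_a, obs_b) -> float:
    """Placeholder for the trained comparator network: estimates the
    probability that obs_b is reachable from obs_a within a few
    environment steps (high = familiar, low = novel)."""
    return random.random()  # a real model would score actual observations

def curiosity_bonus(obs, memory, alpha=1.0, novelty_threshold=0.5):
    """Reward observations that look far, in environment steps, from
    everything in episodic memory; store sufficiently novel ones."""
    if not memory:
        memory.append(obs)
        return alpha
    # If any stored observation reaches `obs` quickly, it is familiar.
    familiarity = max(reachability(m, obs) for m in memory)
    if familiarity < novelty_threshold:
        memory.append(obs)  # novel enough: remember it
    return alpha * (1.0 - familiarity)

# At each step, the RL algorithm trains on the combined reward:
memory, obs, task_reward = [], "obs_0", 0.0
combined_reward = task_reward + curiosity_bonus(obs, memory)
```

Note the contrast with ICM: novelty here is defined by distance from memory rather than by prediction error, which is what closes the procrastination loophole.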
The agent was given various tasks, such as searching for a goal in a maze or collecting good objects and avoiding bad ones. The standard setting in previous formulations such as ICM on DMLab was to equip the agent with a laser-like science-fiction gadget; if the agent did not need the gadget for a particular task, it was free not to use it. In this test, the surprise-based ICM method used the gadget a lot even when it was useless for the task at hand. The newly introduced method instead learns reasonable exploration behavior under the same conditions, because it does not try to predict the result of its actions but rather seeks observations that are "harder" to reach from those already in the episodic memory. In short, the agent implicitly pursues goals that require more effort to reach from memory than a single tagging action. The approach also penalizes an agent running in circles: after completing the first circle, the agent encounters no observations other than those already in memory, and thus receives no reward.

In these experiments, the model achieved the following:

- In ViZDoom, the agent learned to successfully navigate to a distant goal at least two times faster than the state-of-the-art curiosity method ICM.
- In DMLab, the agent generalized well to new, procedurally generated levels of the game, reaching the goal at least two times more frequently than ICM on test mazes with very sparse reward.

To learn more about the episodic memory-based curiosity model, check out Google AI's post and the paper Episodic Curiosity Through Reachability.

DeepMind open sources TRFL, a new library of reinforcement learning building blocks
Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system
Understanding Deep Reinforcement Learning by understanding the Markov Decision Process [Tutorial]

macOS Mojave: Apple updates the Mac experience for 2018

Natasha Mathur
06 Jun 2018
4 min read
The new version of macOS, called Mojave, was announced at Apple's ongoing annual developer conference, WWDC 2018. It includes a bunch of new features, namely Dark Mode, a revamped Mac App Store, desktop Stacks, new security controls, and Safari privacy protections, in addition to other updates. The final release will come in the fall, around September or October, with a public beta releasing this summer. Let's have a look at what's new in macOS Mojave.

Key macOS Mojave features

Dark Mode

Apple has added a Dark Mode to macOS with this release. It changes the dock, taskbar, and the chrome around apps to a dark gray color. It doesn't add new functionality; like most dark modes, it is mainly about aesthetics. An API is available for developers to implement Dark Mode in their apps. Mojave also introduces a new Dynamic Desktop that automatically changes the desktop picture to match the time of day.

Revamped Mac App Store

The Mac App Store is finally revamped in Mojave. Taking inspiration from the iOS store makeover last year, the redesigned Mac App Store features new app collections along with much more editorial content. Many apps from top developers are also coming to the Mac App Store, including Office from Microsoft and Lightroom CC from Adobe.

Apple News, Stocks, Home, Voice Memos

Apps such as News, Stocks, Voice Memos, and Home are available on the Mac for the first time. The News app comes with articles, photos, and videos that will look great on the Mac display. The Home app allows Mac users to control their HomeKit-enabled accessories, performing tasks like turning lights on and off or adjusting thermostat settings. Voice Memos makes it easy to record personal notes, lectures, interviews, and song ideas, and to access them from iPhone, iPad, or Mac. Stocks provides curated market news along with a personalized watchlist, complete with quotes and interactive charts.

Desktop Stacks

A new feature called Stacks cleans up a messy desktop by dedicating folders to specific file types; the folders automatically collect the files that belong to them, so there will be stacks of PDFs, images, movies, and so on. Clicking a stack brings its files to the desktop, making it easy to browse through them.

Security controls

With the additional prompts Apple has added in Mojave, you can now control which apps can access your information and hardware: you decide whether an app gets access to your location, photos, contacts, microphone, and more.

Safari privacy

Apple already blocks websites in Safari that track you based on your system configuration. Safari can now also prevent social networks like Facebook from tracking you across the web using "like" buttons, and it flags reused passwords so users can change them.

Finder updates

Finder has a new "gallery view" that lets you scroll through small previews of files. There is also a way to view metadata inside a Finder window, and you can perform quick actions on files, such as rotating a photo or assembling multiple files into a PDF.

Markup and screenshots

Users can mark up documents and make changes inside Quick Look, which helps deal with files quickly. When you take a screenshot, you are presented with a button to mark it up.

To know more about macOS Mojave, check out the official blog post by Apple.

Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others
WWDC 2018 Preview: 5 Things to expect from Apple's Developer Conference
Apple steals AI chief from Google

Apple revoked Facebook developer certificates due to misuse of Apple’s Enterprise Developer Program; Google also disabled its iOS research app

Savia Lobo
31 Jan 2019
3 min read
Facebook employees are experiencing turbulent times as Apple has decided to revoke the social media giant's developer certificates. This follows a TechCrunch report that said Facebook paid users, including teens, $20 a month to install the "Facebook Research" app on their devices, which allowed the company to track their mobile and web browsing activity. Following the revocation, Facebook employees are unable to access early versions of Facebook apps such as Instagram and Messenger, as well as internal apps used for everyday activities such as ordering food and finding locations on a map.

Yesterday, Apple announced that it has shut down the Facebook Research app for iOS. According to Apple, "We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple". The company further said, "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data."

Per a Mashable report, "Facebook employees argued that Apple's move was merely an attempt to distract from an embarrassing FaceTime bug that went public earlier in the week." One employee commented, "Anything to take the heat off the FaceTime security breach." Facebook said that it is "working closely with Apple to reinstate our most critical internal apps immediately."

Mark Zuckerberg has also received a stern letter from Senator Mark Warner, with a list of questions about the company's data gathering practices, after the TechCrunch report went viral. In a statement, Warner said, "It is inherently manipulative to offer teens money in exchange for their personal information when younger users don't have a clear understanding of how much data they're handing over and how sensitive it is."

Google disabled its iOS app too

Like Facebook, Google distributed a private app, Screenwise Meter, to monitor how people use their iPhones, rewarding users with gift cards from Google's Opinion Rewards program in exchange for information on their internet usage. Yesterday, Google announced that it has disabled the iOS app. The Screenwise Meter app has been part of a program that has been around since 2012, which first tracked household web access through a Chrome extension and a special Google-provided tracking router. The program is open to anyone over 18, but allows users aged 13 and above to join if they are in the same household. Facebook's tracking app, by contrast, targeted people between the ages of 13 and 25.

A Google spokesperson told The Verge, "The Screenwise Meter iOS app should not have operated under Apple's developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices. This app is completely voluntary and always has been. We've been upfront with users about the way we use their data in this app, we have no access to encrypted data in apps and on devices, and users can opt out of the program at any time."

To know more about this news, head over to The Verge.

Facebook researchers show random methods without any training can outperform modern sentence embeddings models for sentence classification
Stanford experiment results on how deactivating Facebook affects social welfare measures
Facebook pays users $20/month to install a 'Facebook Research' VPN that spies on their phone and web activities, TechCrunch reports

Is the ‘commons clause’ a threat to open source?

Prasad Ramesh
12 Sep 2018
4 min read
Currently, free and open source software means anyone can modify it and repurpose it for their needs. It also means companies can take such software and use it to their commercial advantage. The Commons Clause aims to change that by forbidding monetization, which mostly means commercial use.

The case in favor of the commons clause

Companies that commercialize open source projects often give little back, and this can be seen as an abuse of open source. Projects are made open source to promote sharing and learning, not necessarily so that tech giants can make money from software that was given away freely. This is not illegal, but it can be viewed as exploiting open source projects to make money without the creators or the community getting anything back.

What is the commons clause?

The Commons Clause website states that it was contributed by FOSSA, whose founder and CEO is Kevin Wang. The task of drafting the clause was handed to open source lawyer Heather Meeker. It is not a license itself but an additional clause that can be added to open source licenses: it places a narrow commercial restriction on top of the existing license, restricting the ability to "sell" the software while keeping all the original license permissions unchanged. The stated intent is to preserve open source projects and help them thrive. To avoid confusion: when the Commons Clause is added to a project, the project is no longer "open source" by the formal definition. It still retains many elements of an open source project, such as free access and the freedom to modify and redistribute, but not the freedom to sell. In short, a project carrying the Commons Clause can no longer be monetized by third parties.

The Commons Clause FAQ states: "The Commons Clause was intended, in practice, to have virtually no effect other than force a negotiation with those who take predatory commercial advantage of open source development. In practice, those are some of the biggest technology businesses in the world, some of whom use open source software but don't give back to the community. Freedom for others to commercialize your software comes with starting an open source project, and while that freedom is important to uphold, growth and commercial pressures will inevitably force some projects to close. The Commons Clause provides an alternative."

The case against the commons clause

There are discussions with conflicting views on various forums, so I will try to give my own take. Opponents of the clause point out that a piece of software becomes proprietary once the Commons Clause is applied: any service created from the original software remains the intellectual property of the original company to sell. The fear is that this will discourage the community from contributing to projects that carry the clause, since anything built on them stays with the company, which alone can monetize it.

On the one hand, companies making millions of dollars from open source software without giving anything back is not in line with the ethos of open source. On the other hand, smaller startups and individual contributors get penalized by the clause too. What if a small company contributes to a large open source project and wants to use the derived product for its own growth? It can't anymore, once the Commons Clause is applied to the project it contributed to. At the same time, it is not reasonable to think a contributor deserves half the profits whenever a company makes millions of dollars using their open source project.

What can be done then?

The Commons Clause doesn't really help the open source community; it only prevents bigger companies from monetizing projects unfairly. I think major tech companies could license open source software for commercial use separately. Perhaps a financial benchmark (say, $100,000 in profit) could trigger paid licensing: if you make that much money from the open source software, you pay for a license for further use. This would keep small companies from running out of money and being forced to close their source.

The Commons Clause is currently at version 1.0, and there will be future revisions. It was recently adopted by Redis after Amazon used its open source project commercially. For more information, you can visit the Commons Clause website.

Storj Labs' new Open Source Partner Program: to generate revenue opportunities for open source companies
Home Assistant: an open source Python home automation hub to rule all things smart
NVIDIA open sources its material definition language, MDL SDK

Adobe to spot fake images using Artificial Intelligence

Natasha Mathur
26 Jun 2018
3 min read
Adobe has already been venturing into the AI domain with products such as Adobe Sensei. Now, Adobe has developed technology that is said to use artificial intelligence to detect images that have been heavily edited or tampered with. Adobe is aiming to create more products in the AI space in order to build people's trust in digital media.

Adobe's tools are widely used for editing images as a form of artistic creativity. However, some people use them to unfair advantage by manipulating images to deceive. With AI in the game, the image deception problem may finally start getting fixed.

Vlad Morariu, a senior research scientist at Adobe, has been working on computer vision technologies for detecting manipulated images for a while now. Vlad notes that existing tools can help trace digitally altered photos: different file formats carry metadata storing information about how the image was captured and manipulated, and forensic tools can detect alterations by analyzing strong edges, lighting, noise distribution, and pixel values of a photo. But these tools are not very effective at detecting fake images.

[Figure: examples of manipulated images and detection output. Source: Adobe]

Vlad's continuing research focuses on three common types of image manipulation:

- Splicing: combining parts of two different images.
- Copy-move: cloning or moving objects within a photograph from one place to another.
- Removal: removing an object from a photograph and filling in the space it leaves behind.

This work has greatly cut down the time it takes forensic experts to detect fraudulent images. Vlad also describes how the team trained a deep learning neural network on thousands of known manipulated images. It combines two different methods in one network to further enhance detection: the first uses the RGB stream to detect tampering, while the second uses a noise stream filter (a toy illustration of the noise-residual idea follows below). Although these techniques are not foolproof, they provide more options for combating digital manipulation today. Adobe may go deeper into AI in the future with tools for detecting other kinds of manipulation in photographs.

To know more about Adobe's efforts to combat digital manipulation, check out Adobe's official blog post.

Adobe glides into Augmented Reality with Adobe Aero
Adobe is going to acquire Magento for $1.68 Billion
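To give a feel for the noise-stream idea mentioned above: forensics pipelines of this kind typically feed the network a high-pass noise residual rather than raw pixels, since regions spliced in from another photo often carry a different noise fingerprint. The sketch below uses a generic Laplacian-style high-pass kernel chosen purely for illustration; it is not Adobe's actual filter bank or model:

```python
import numpy as np
from scipy.signal import convolve2d

# Generic high-pass kernel: suppresses image content, keeps noise.
HIGH_PASS = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
], dtype=np.float32) / 8.0

def noise_residual(gray_image: np.ndarray) -> np.ndarray:
    """Return the noise residual of a grayscale image.

    Regions pasted in from another photo often show different
    residual statistics, which is what the second ("noise") stream
    of a two-stream detector learns to spot.
    """
    return convolve2d(gray_image, HIGH_PASS, mode="same", boundary="symm")

if __name__ == "__main__":
    img = np.random.rand(64, 64).astype(np.float32)  # stand-in image
    print(noise_residual(img).std())
```

In a two-stream detector, a residual like this feeds the second stream, while the first stream sees the ordinary RGB image.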