Category Archives: TechNews

Apple Pay Later: Pre version launched in the US

After repeated delays, a preliminary version of Apple Pay Later has now launched for selected users in the US. The service lets them split payments into four installments.

Apple Pay users can now also pay in installments, at least in the US, where the Cupertino-based tech giant has launched a pre-release version of Apple Pay Later. Apple announced the launch in a company press release.

You can now pay in installments with Apple Pay. (Photo: nikkimeel/Shutterstock)

Pre-release version for select US customers only

However, even in the United States, initially only randomly selected users can benefit: they receive an invitation to the pre-release version. Participants also need an iPhone running the recently released iOS 16.4 or an iPad running iPadOS 16.4.

It is not yet clear when the full version will launch in the US; Apple says only that it will arrive in the coming months.

Apple had already presented Pay Later at the Worldwide Developers Conference in June 2022. However, the rollout was postponed, reportedly due to technical problems. Earlier this year, Apple then tested the feature in a beta, first with its own employees and then with retail staff.

This is how installment payments via Apple Pay work

With Apple Pay Later, users can split payments into up to four installments, which must be paid within six weeks. No interest or fees apply. In principle, it works at any retailer that supports Apple Pay.

The loans that can be applied for through Pay Later range from a minimum of $50 to a maximum of $1,000. According to Apple, a soft credit check runs in the background for every transaction.
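The mechanics are simple enough to sketch. Below is a minimal Python illustration, assuming equal installments due every two weeks starting at purchase (the exact due dates are an assumption; Apple only says four installments within six weeks) and enforcing the $50 to $1,000 limits:

```python
from datetime import date, timedelta

def pay_later_schedule(total_cents, start, installments=4):
    """Split a purchase into equal, interest-free installments.

    Sketch of the scheme described in the article: up to four
    installments over six weeks, no interest or fees. The
    every-two-weeks due dates are an assumption.
    """
    if not 5000 <= total_cents <= 100000:  # $50 to $1,000 per the article
        raise ValueError("Pay Later loans range from $50 to $1,000")
    base, remainder = divmod(total_cents, installments)
    schedule = []
    for i in range(installments):
        amount = base + (1 if i < remainder else 0)  # spread rounding cents
        schedule.append((start + timedelta(weeks=2 * i), amount))
    return schedule

# Example: a $99.99 purchase split over six weeks
for due, cents in pay_later_schedule(9999, date(2023, 4, 1)):
    print(due, f"${cents / 100:.2f}")
```

The installments always sum exactly to the purchase price; any leftover cents from the division are spread across the first payments.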

Repayments can only be made via debit card; credit cards are not accepted, as they could push customers deeper into a debt spiral.

Management of installments via Apple Wallet

Users can track and manage when the installments are due via Apple Wallet. Pay Later is fully integrated into the app. Just before the installments are due, Wallet sends a notification to the user.

To ensure security and privacy, Apple Pay Later authenticates transactions via Face ID, Touch ID, or passcode.

This is how ChatGPT works

The powerful language model ChatGPT generates texts that can hardly be distinguished from those of human authors. We explain the technology behind the hype.

Image: Programming, Free Stock Picture, MorgueFile.com.

Since the US company OpenAI released its new artificial intelligence (AI) ChatGPT for free testing at the end of November last year, users on social media have been sharing countless examples of the chatbot answering knowledge questions, formulating emails, writing poems, or summarizing texts.

ChatGPT’s ability to handle natural language confidently and to grasp complex relationships with high accuracy is seen by some observers as another milestone on the way to strong artificial intelligence, that is, to algorithms on a par with human thinking in every respect. But how does the technology that makes all this possible work?

Six years – an AI eternity

ChatGPT is a language model: a machine-learning algorithm specialized in processing text. It is the latest generation in a series of language models based on the so-called Transformer model introduced in 2017. The Transformer architecture caused a stir in professional circles when it was released because it enabled specialized language models for text translation and other tasks with unprecedented power.
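The core operation of the Transformer architecture mentioned here is self-attention: each token in a sequence computes a weighted mixture of all tokens. A minimal numpy sketch of scaled dot-product attention (the shapes, weights, and function names are illustrative, not OpenAI's code):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention, the core of the 2017
    Transformer. x has shape (seq_len, d_model); the weight
    matrices project tokens to queries, keys, and values."""
    q, k, v = x @ wq, x @ wk, x @ wv
    # scores[i, j]: how much token i attends to token j
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v  # weighted mix of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                       # 5 tokens, 8-dim embeddings
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (5, 8)
```

A real Transformer stacks many such attention layers (with multiple heads each) and interleaves them with feed-forward layers, but the attention step above is the ingredient that made the architecture so powerful.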

As early as 2018, OpenAI published the Generative Pretrained Transformer (GPT) as a modification of the Transformer with a simplified structure (PDF). A major innovation was the idea of no longer training the language model for one specific task, such as translation or text classification, for which only limited amounts of sample data are often available.

Instead, the GPT model was pre-trained on very large data sets of generic text in order to learn the statistical properties of language as such, independently of any specific task. The model prepared in this way could then be adapted effectively to specific tasks with much smaller sets of sample data.
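This pretrain-then-adapt recipe can be illustrated with a deliberately tiny stand-in: a character-bigram model whose counts are first built from a "generic" corpus and then topped up with a small amount of task-specific data. Real GPT models predict tokens with a deep neural network rather than counts, but the shape of the training recipe is the same:

```python
from collections import Counter

def train_bigram(counts, text):
    """Count character bigrams - a toy stand-in for
    next-token pretraining on generic text."""
    counts.update(zip(text, text[1:]))
    return counts

def predict_next(counts, ch):
    """Most likely next character given the current one."""
    cands = {b: n for (a, b), n in counts.items() if a == ch}
    return max(cands, key=cands.get) if cands else None

# "Pre-training" on a larger generic corpus...
model = train_bigram(Counter(), "the quick brown fox jumps over the lazy dog. " * 3)
# ...then adaptation with a small amount of task-specific data
model = train_bigram(model, "qa: q? a!")
print(predict_next(model, "t"))  # h
```

The point of the analogy: the expensive, general statistics come from the large corpus, and the cheap final step only nudges the same model toward a specific task.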

The next version, GPT-2, appeared in 2019 (PDF). It was essentially a scaled-up version of its predecessor, with significantly more parameters and training on correspondingly larger data sets. In contrast to the original version, GPT-2 was no longer adapted to special problems: simply by training on generic texts from the internet, it could solve many different tasks, such as translating texts or answering knowledge questions.

With 175 billion parameters, the third generation, GPT-3 (PDF), was even larger than GPT-2 and correspondingly more powerful. It attracted attention beyond the AI research community, particularly with its ability to write longer texts that were almost indistinguishable from those of human authors.

However, the model's limitations also became apparent, including ethical problems with offensive or biased texts and a habit of making grossly false factual claims in persuasive-sounding language.

To remedy these shortcomings, OpenAI added a fundamentally new dimension to the training concept for its next language models, InstructGPT and ChatGPT: instead of leaving the model alone with huge amounts of text from the internet, it was subsequently taught by human “teachers” to follow users’ concrete instructions and to make statements that are ethically justifiable and factually correct. To make this training effective, the algorithmic approach of the pure Transformer model had to be extended by a further step, so-called reinforcement learning.
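The role of the human feedback can be sketched with a toy example: a reward model scores candidate answers, and the answer it prefers wins. The hand-written scoring function below is entirely hypothetical; in real RLHF the reward model is a neural network trained on human preference rankings, and the language model's weights are then updated toward high-reward outputs (typically with the PPO algorithm). This sketch only shows the feedback signal itself:

```python
def reward_model(answer):
    """Hypothetical stand-in for a learned reward model trained
    on human preference rankings."""
    score = 0.0
    # e.g. penalize unhelpfully evasive answers
    score += -0.5 if "sorry" in answer.lower() else 1.0
    # e.g. reward complete sentences
    score += 0.5 if answer.endswith(".") else 0.0
    return score

def best_of_n(candidates):
    """Best-of-n sampling: generate several candidate answers and
    keep the one the reward model prefers. Full RLHF goes further
    and changes the model itself, but uses the same signal."""
    return max(candidates, key=reward_model)

candidates = [
    "Sorry, I cannot help with that",
    "Paris is the capital of France.",
]
print(best_of_n(candidates))  # Paris is the capital of France.
```

Even this crude selection step shows why a separate reward signal matters: the base language model only knows what text is *likely*, while the reward model encodes what humans consider *helpful and acceptable*.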

ChatGPT's impressive achievements are the result of a whole range of algorithms and methods, plus many small tricks. This article focuses on providing an intuitive basic understanding of the technology without getting bogged down in mathematical or technical details. The links in the text point to sources that fill in the gaps in this presentation.


Competition for SSD: 1.4 petabytes of data on future magnetic tape

Tape storage technology seemed to be falling short of expectations recently. Now it looks like it’s catching up again – thanks to a new material.

Linear Tape Open (LTO) first appeared on the market in 2000. At that time, the storage media, which record data as encoded tracks on magnetic tape, had a capacity of just 200 gigabytes.

The tapes have now reached their ninth generation and can currently record 18 terabytes of data (uncompressed). The technology is a cost-effective way, especially for companies, to preserve critical data reliably and durably.

Image: Data Storage, Free Stock Picture, MorgueFile.com.

LTO-9 still fell short of expectations

But what sounds like an incredible amount of storage space was initially a disappointment: LTO-9 was originally supposed to offer 24 terabytes of space, but could not meet that expectation.

According to a new roadmap presented by the developers IBM, HPE, and Quantum at the beginning of September, tapes of the 14th generation, expected in 2033/34, should be able to store a whopping 1,440 terabytes (1.4 petabytes).

Magnetic storage tapes: New material creates new possibilities

According to Heise Online, this is made possible by coating the magnetic tapes with strontium ferrite (SrFe) instead of the barium ferrite (BaFe) used until now. Fujifilm has already developed and tested the first prototypes. The new material is to be used from LTO-13 onward.

LTO is also significantly cheaper than SSDs, whose maximum capacity currently tops out at around 100 terabytes. While the most expensive SSD medium, priced at $40,000, works out to about $2.50 per gigabyte, LTO costs only about $0.01 per gigabyte.
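The quoted prices are easy to sanity-check. The $2.50-per-gigabyte figure implies an SSD capacity of about 16 TB, and a tape price of roughly $180 for an 18 TB LTO-9 cartridge would match the quoted $0.01 per gigabyte; both capacities/prices are back-calculated assumptions, not figures from the article:

```python
def dollars_per_gb(price_usd, capacity_tb):
    """Storage cost per gigabyte (1 TB = 1,000 GB)."""
    return price_usd / (capacity_tb * 1_000)

# Assumed capacities/prices, back-calculated from the article's $/GB figures
ssd = dollars_per_gb(40_000, 16)   # ~$2.50 per GB
tape = dollars_per_gb(180, 18)     # ~$0.01 per GB
print(f"SSD: ${ssd:.2f}/GB, LTO: ${tape:.2f}/GB")
```

By this arithmetic the tape medium is around 250 times cheaper per gigabyte, which is the core of LTO's appeal for long-term archival storage.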

According to Sam Werner, IBM vice president of storage product management, LTO “provides organizations with a sustainable, reliable and cost-effective solution to protect and store their critical business data.”

Nvidia GeForce Now adds mobile touch controls to a new collection of games

Nvidia has expanded the list of titles available on its cloud gaming platform that work with touch controls.

GeForce Now has already offered onscreen controls for some mobile games, notably Fortnite and Genshin Impact. That list is now expanding to include a number of games playable without a gamepad on phones and tablets.

These include Trine 2, Slay the Spire, and Dota Underlords. There are also some games that only work with touch controls on the tablet, including Shadowrun Returns, Talisman: Digital Edition, and Magic: The Gathering Arena.

Image: Video Game, Free Stock Picture, MorgueFile.com.

The full list of touch-enabled games:

Mobile and tablet

  • Dota Underlords (Steam)
  • Fortnite (Epic Games)
  • Genshin Impact (HoYoverse)
  • Into the Breach (Steam and Epic Games)
  • Papers, Please (Steam)
  • Slay the Spire (Steam)
  • Tabletop Simulator (Steam)
  • Trine 2: Complete Story (Steam)

Tablet only

  • Bridge Constructor Portal (Steam)
  • Door Kickers (Steam)
  • Magic: The Gathering Arena (Wizards.com and Epic Games)
  • March of the Empires (Steam)
  • Monster Train (Steam)
  • Shadowrun Returns (Steam and Epic Games)
  • Talisman: Digital Edition (Steam)

Nvidia’s GeForce Now works with your own games purchased from various supported digital stores such as Steam and Epic Games Store. Some, like Fortnite, are free to play.

The GeForce Now app for Android also now has a new Mobile Touch Controls row where you can find supported games. The Android app now also includes Adaptive VSync support for select games to improve performance.

Samsung has a new battery issue that can affect any Galaxy phone

Samsung has a new battery problem with its Galaxy smartphones. After the disaster with the Galaxy Note 7, which was recalled worldwide over exploding batteries, older smartphones are now swelling and can become an unnoticed hazard.

Image: Circuit Board Chip, Free Stock Picture, MorgueFile.com.

Samsung smartphones are bloating

After the catastrophic incidents involving the Galaxy Note 7, Samsung promised that its batteries would be safe and that there would be no more problems. As it turns out, Samsung has a battery problem that goes well beyond a single model. YouTuber Mrwhosetheboss has found many Samsung phones in his smartphone collection with swollen batteries. A chemical reaction inside apparently produces gas, so the battery swells up and the back of the phone bursts open. There is no explosion or fire in these cases, so the batteries are not acutely dangerous.

It becomes critical when the battery is swelling and you don't notice it immediately. The problem mainly affects older Samsung phones that are stored with an empty battery. Our Galaxy S6 Edge has also ballooned, as you can see in the cover photo above. But Mrwhosetheboss has also noticed early signs of a swelling battery on his Galaxy S20 FE and Galaxy Z Fold 2, and that is where it gets dangerous: if gases have formed and the battery is swollen, charging the smartphone could trigger a reaction and excessive heat.

How should you store a Samsung phone?

Mrwhosetheboss gives an important tip on how to store Samsung phones you are no longer using: charge the battery to about 50 percent first. This should reduce the risk of the battery swelling. If you are still using an older Samsung phone, you should regularly check whether the battery has already swollen slightly. If it has, stop charging the phone and contact Samsung. We will ask Samsung for a statement on the matter.