Category Archives: Science

Green IT: How sustainable applications reduce CO2 emissions

Software consumes a lot of energy. A key to sustainable applications: demand shaping.

Image: Programming, Free Stock Picture, MorgueFile.com.

According to forecasts by the Green Software Foundation, information and communication technology will account for around 20 percent of all electricity consumption by 2030. Emissions from digital technologies will double by 2025 compared to 2019 levels.

But the technology industry is becoming increasingly aware of its carbon footprint. Not least against the backdrop of the energy crisis, the importance of green IT is becoming ever more apparent.

Green IT encompasses all measures that combine technological progress with environmental protection. A distinction is made between Green by IT and Green in IT. Green by IT refers to technologies that actively help to achieve sustainability goals, such as software that makes consumption measurable and reveals optimization potential. Green in IT, on the other hand, aims to optimize IT processes themselves so that they have the least possible negative impact, or ideally even a positive one, on the environment and resources.

This is not primarily about restrictions, but about responsible, resource-conserving use of technology: the greatest possible benefit should be obtained from every gram of CO₂ emitted into the atmosphere. In software development, the demand-shaping principle makes this possible.

Demand shaping

Demand shaping is a strategy of adjusting demand to match the available supply: when supply is low, demand is reduced, and as supply rises, demand can rise with it. Video conferencing is a familiar example. When a user has low bandwidth, the video quality is reduced while the essential audio quality remains high. Demand (video quality) is adapted to supply (bandwidth).
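The video-conferencing example can be sketched in a few lines. This is a minimal illustration of the demand-shaping idea, not code from any real product; the tier names and bandwidth thresholds are made up for the example.

```python
# Illustrative demand shaping: pick the video quality tier that fits the
# currently available bandwidth, while audio is always kept.
# Tier names and Mbit/s thresholds below are hypothetical sample values.

QUALITY_TIERS = [
    ("1080p", 5.0),       # requires ~5 Mbit/s
    ("720p", 2.5),
    ("480p", 1.0),
    ("audio-only", 0.0),  # audio quality is preserved in every case
]

def shape_demand(available_mbits: float) -> str:
    """Return the highest quality tier the current supply can sustain."""
    for tier, required in QUALITY_TIERS:
        if available_mbits >= required:
            return tier
    return "audio-only"

print(shape_demand(6.0))  # ample supply -> 1080p
print(shape_demand(0.4))  # low supply -> demand reduced to audio-only
```

Demand (the chosen tier) follows supply (the measured bandwidth) instead of the application insisting on a fixed resource budget.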

Another example of demand shaping is progressive enhancement in web design. The most basic form of a website is served to older browsers and low-bandwidth connections; the more resources and bandwidth a user's device has available, the more optional features are provided.

This principle can also be used to achieve energy efficiency by matching an application's energy demand to the available supply. Demand shaping thus stands in contrast to the widespread over-provisioning principle of supplying more resources than necessary in order to cover peak loads or growing demand.

Through demand shaping, so-called "eco modes" can be built into software applications, similar to those in cars and household appliances. The application can then run in an emissions-friendly way at the expense of performance, or at full power with higher energy consumption. Following the nudging principle, applications can either default to eco mode or let users choose.
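An eco mode of this kind can be as simple as a default setting that trades responsiveness for energy. The sketch below is hypothetical; the setting name and polling intervals are invented for illustration, and the nudge consists of eco mode being the pre-selected default.

```python
from dataclasses import dataclass

# Hypothetical "eco mode" setting that is on by default (the nudge):
# users can opt out, but the emissions-friendly choice is pre-selected.

@dataclass
class AppSettings:
    eco_mode: bool = True  # default nudges users toward lower consumption

def polling_interval_seconds(settings: AppSettings) -> int:
    # In eco mode the app polls its backend far less often, trading
    # responsiveness for lower energy consumption.
    return 60 if settings.eco_mode else 5

print(polling_interval_seconds(AppSettings()))                # 60
print(polling_interval_seconds(AppSettings(eco_mode=False)))  # 5
```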

Another example of sustainable applications is software optimized for edge computing. Data, individual processing steps, or complete applications are brought closer to the users instead of being processed in remote data centers. This reduces not only latency but also CO₂ emissions, since less energy is needed to transmit the data.

Renewable energy

Applications can also be programmed so that the respective mode – energy saving or maximum performance – depends on the availability of renewable energy.

Demand shaping is related to the principle of demand shifting, in which the demand for computing, storage, or network resources is shifted to other regions, or to times when the availability of renewable energy is higher. Companies should rely on solutions that automatically move computing, storage, and network resources to wherever the carbon footprint is lowest.
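The core of demand shifting is a simple decision: run a deferrable workload where (or when) the grid is cleanest. A minimal sketch, assuming made-up region names and carbon-intensity figures (gCO₂/kWh) that in practice would come from a grid-data provider:

```python
# Hedged sketch of demand shifting: choose the region whose grid
# currently has the lowest carbon intensity for a deferrable batch job.
# Region names and gCO2/kWh values are invented sample data.

def pick_greenest_region(intensity_g_per_kwh: dict[str, float]) -> str:
    """Return the region with the lowest current carbon intensity."""
    return min(intensity_g_per_kwh, key=intensity_g_per_kwh.get)

current = {
    "eu-north": 45.0,    # hydro-heavy grid
    "eu-central": 310.0,
    "us-east": 390.0,
}

print(pick_greenest_region(current))  # eu-north
```

The same comparison applied over forecast time slots instead of regions yields time-based shifting: the job waits for the hour with the lowest predicted intensity.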

Both demand shaping and demand shifting are important for reducing CO₂ emissions in IT. Depending on the application, developers should determine whether an application's computing load should be reduced or relocated when carbon intensity is high.

This is how ChatGPT works

The powerful language model ChatGPT generates texts that can hardly be distinguished from those of human authors. We explain the technology behind the hype.

Image: Programming, Free Stock Picture, MorgueFile.com.

Since the US company OpenAI released its new artificial intelligence (AI) ChatGPT for free testing at the end of November last year, users on social media have been sharing countless examples of the chatbot answering knowledge questions, formulating e-mails, writing poems, or summarizing texts.

ChatGPT’s confident handling of natural language and its ability to grasp complex relationships with high accuracy are seen by some observers as another milestone on the way to strong artificial intelligence – that is, to algorithms on a par with human thinking in every respect. But how does the technology that makes all this possible actually work?

Six years – an AI eternity

ChatGPT is a language model, i.e. a machine-learning algorithm specialized in processing text. It is the latest generation in a series of language models based on the so-called Transformer model introduced in 2017. The Transformer architecture caused a stir in professional circles on its release because it enabled specialized language models for text translation and other tasks with unprecedented power.

As early as 2018, OpenAI published the Generative Pretrained Transformer (GPT) as a modification of the Transformer with a simplified structure. A major innovation was the idea of no longer training the language model for one special task, such as translation or text classification, for which only limited amounts of sample data are often available.

Instead, the GPT model was pre-trained on very large data sets of generic texts in order to learn statistical properties of language as such independently of the specific task. The model prepared in this way could then be effectively adapted with smaller sets of sample data for specific tasks.
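The pretrain-then-adapt idea can be illustrated with a deliberately tiny stand-in for a language model. The bigram counter below is of course nothing like GPT; it only shows the workflow the paragraph describes: learn general word statistics from generic text first, then adapt the same model with a small task-specific sample. All the text snippets are invented.

```python
from collections import Counter, defaultdict

# Toy illustration of pretraining followed by adaptation (NOT GPT itself):
# a bigram "language model" learns which word tends to follow which.

def count_bigrams(text: str, counts=None):
    """Add the bigram statistics of `text` to `counts` (or a fresh table)."""
    counts = counts if counts is not None else defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word: str) -> str:
    """Predict the most frequently observed successor of `word`."""
    return counts[word.lower()].most_common(1)[0][0]

# "Pretraining" on a larger generic corpus...
model = count_bigrams("the cat sat on the mat and the dog sat on the rug")
# ...then adapting the same model with a small task-specific sample.
model = count_bigrams("translate the sentence into french", model)

print(predict_next(model, "sat"))   # "on"     - learned from generic text
print(predict_next(model, "into"))  # "french" - learned from the task sample
```

The adapted model keeps its general statistics and merely layers the task data on top, which is the essential point of the GPT training recipe described above.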

The next version, GPT-2, appeared in 2019. It was essentially a scaled-up version of the previous model, with a significantly higher number of parameters and training on correspondingly larger data sets. In contrast to the original version, GPT-2 was no longer adapted for special problems; simply by training on generic texts from the Internet, it was able to solve many different tasks, such as translating texts or answering knowledge questions.

With 175 billion parameters, the third generation, GPT-3, was even larger than GPT-2 and correspondingly more powerful. It attracted attention beyond the AI research community, particularly with its ability to write longer texts that were almost indistinguishable from those of human authors.

However, the model's limitations also became apparent, including ethical issues with offensive or biased texts and the habit of making grossly false factual claims in persuasive-sounding language.

To remedy these shortcomings, OpenAI added a fundamentally new dimension to the training concept for its next language models, InstructGPT and ChatGPT: instead of leaving the model alone with huge amounts of text from the Internet, it was subsequently taught by human "teachers" to follow concrete user instructions and to make statements that are ethically justifiable and factually correct. To make this training effective, the algorithmic approach of the pure Transformer model had to be extended by a further step – so-called reinforcement learning.

The impressive achievements of ChatGPT are the result of a whole range of different algorithms and methods, along with many tricks, some of them very small. This article focuses on providing an intuitive basic understanding of the technology without getting bogged down in mathematical or technical details; the links in the text point to sources that fill in the gaps of this presentation.

Competition for SSD: 1.4 petabytes of data on future magnetic tape

Tape storage technology recently seemed to be falling short of expectations. Now it looks like it is catching up again – thanks to a new material.

Linear Tape Open (LTO) first appeared on the market in 2000. At that time, the storage media, which record data as encoded tracks on magnetic tape, had a capacity of 200 gigabytes.

The tapes have now reached their ninth generation and can currently record 18 terabytes of data (uncompressed). Especially for companies, the technology is a cost-effective way to preserve critical data reliably and durably.

Image: Data Storage, Free Stock Picture, MorgueFile.com.

LTO-9 still fell short of expectations

But what sounds like an enormous amount of storage space was initially a disappointment: LTO-9 was originally supposed to offer 24 terabytes of space but could not meet that expectation.

According to a new roadmap presented by the developers IBM, HPE, and Quantum at the beginning of September, it should be possible to store a whopping 1,440 terabytes (1.4 petabytes) per tape from the 14th generation, which is expected to appear in 2033/34.

Magnetic storage tapes: New material creates new possibilities

According to Heise Online, this is made possible by coating the magnetic tapes with strontium ferrite (SrFe) instead of the barium ferrite (BaFe) used up to now. Fujifilm has already developed and tested the first prototypes. The new material is to be used from LTO-13 onward.

LTO is also significantly cheaper than SSDs, whose maximum capacity is currently around 100 terabytes. While the most expensive SSD, priced at $40,000, comes to $2.50 per gigabyte, LTO costs only about $0.01 per gigabyte.
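The comparison boils down to a price-per-capacity division. The snippet below works through it with the article's own figures (prices and capacities as reported, not independently verified); the $180 tape price is simply what an 18 TB cartridge would cost at the stated $0.01/GB.

```python
# Cost-per-gigabyte arithmetic using the figures quoted in the article.

def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Dollars per gigabyte for a storage medium."""
    return price_usd / capacity_gb

# $2.50/GB at a $40,000 price implies an SSD capacity of 16,000 GB (16 TB):
print(40_000 / 2.50)  # 16000.0

# At the article's $0.01/GB, an 18 TB LTO-9 cartridge would cost ~$180:
print(cost_per_gb(180, 18_000))  # 0.01
```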

According to Sam Werner, IBM vice president of storage product management, LTO “provides organizations with a sustainable, reliable and cost-effective solution to protect and store their critical business data.”

Samsung has a new battery issue that can affect any Galaxy phone

Samsung has a new battery problem with its Galaxy smartphones. After the disaster with the Galaxy Note 7, which exploded and was recalled worldwide, older smartphones are now bloating and can become an unnoticed hazard.

Image: Circuit Board Chip, Free Stock Picture, MorgueFile.com.

Samsung smartphones are bloating

After the catastrophic incidents involving the Galaxy Note 7, Samsung promised that its batteries would be safe and that there would be no more problems. As it turns out, Samsung has a battery problem affecting far more than just one model. YouTuber Mrwhosetheboss has found many Samsung phones in his smartphone collection with swollen batteries. A chemical reaction inside apparently produces gas, so that the battery swells up and the back of the phone bursts open. In these cases there is no explosion or fire, so the batteries are fundamentally safe.

It becomes critical if the battery is bloating and you don't notice it immediately. The problem tends to affect older Samsung phones that are stored with an empty battery. Our Galaxy S6 edge has also ballooned, as you can see in the cover photo above. But Mrwhosetheboss has also noticed early signs of a swelling battery on his Galaxy S20 FE and Galaxy Z Fold 2. And that is where it gets dangerous: if gases have formed and the battery has swollen, charging the smartphone could trigger a reaction and excessive heat.

How should you store a Samsung phone?

Mrwhosetheboss gives an important tip on how to store Samsung phones you are no longer using: charge the battery to about 50 percent first, which should reduce the risk of it bloating. If you are still using an older Samsung phone, you should regularly check whether the battery has already swollen slightly. If it has, stop charging the phone and contact Samsung. We will seek a statement from Samsung on the matter.

Hacker Attacks on Crypto Protocols: Nearly $500M in Damage Last Quarter

Image: Hacker, Free Stock Picture, MorgueFile.com.

As the company Atlas VPN has found, hacker attacks have been particularly successful in recent months. Chainalysis had already warned of a record month.

Although the cryptocurrency sector is now much better regulated and more and more investors are taking the necessary steps to increase security – such as storing their coins in hardware wallets – hacker attacks remain a big issue in the sector, even if they affect only a fraction of it relative to the volume traded.

In one quarter, hackers cause almost half a billion dollars in damage

According to Atlas VPN data, criminals stole around $483 million worth of cryptocurrencies through targeted attacks in the third quarter of 2022. The number of hacks fell by 43 percent compared to the second quarter. In the first quarter, the damage had amounted to around $1.3 billion.

Even if the damage appears large in absolute terms and was certainly significant for those affected, relative to the size of the crypto sector – around $970 billion according to CoinMarketCap – it is not quite as dramatic as it might seem at first glance.

Ethereum, Polkadot and BNB Chain particularly affected

The hacks primarily affected the Ethereum network: a total of 11 attacks on protocols based on the Ethereum blockchain caused $348 million in damage. Considering that most protocols run on Ethereum, however, this is not surprising. For Polkadot, just two attacks caused $52 million in damage, while projects on the BNB Chain were attacked 13 times for a total of only $28 million.
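The per-chain figures imply very different average damage per attack, which a quick calculation makes visible (damage in millions of USD and attack counts as reported by Atlas VPN):

```python
# Average damage per attack, derived from the reported per-chain figures.

incidents = {
    "Ethereum": (348, 11),   # (damage in $M, number of attacks)
    "Polkadot": (52, 2),
    "BNB Chain": (28, 13),
}

for chain, (damage_musd, attacks) in incidents.items():
    print(f"{chain}: ~${damage_musd / attacks:.1f}M per attack")
```

On average, the two Polkadot attacks were by far the largest individual incidents, while the many BNB Chain attacks were comparatively small.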

Importantly, the blockchains themselves are not being attacked. Instead, it is mainly smart contracts in the DeFi area that contain the security gaps.

This quarter could be a record

Chainalysis also tracks the damage caused by cybercrime in the cryptocurrency sector. The Atlas VPN figures for the third quarter are consistent with data from Chainalysis, which expects October to be a record month. As Chainalysis announced on October 12, eleven hacker attacks with damage totaling $718 million had already been registered by then.

If the month's trend continues, the fourth quarter is likely to be the worst yet for the cryptocurrency sector. This month, the BNB Chain hack caused a stir – although at least no funds were stolen from other users. Instead, the attackers created over $100 million worth of coins out of thin air.