Blog


04/16/2024
By Lars Nondal

It was recently announced that Denmark will build one of the world’s most powerful AI supercomputers this year. The new supercomputer will be named Gefion, and it will be run by the Danish Centre for AI Innovation.

The Danish Centre for AI Innovation is financed by the Novo Nordisk Foundation and the Export and Investment Fund of Denmark (EIFO), and its Board will be chaired by Torben Möger Pedersen, who also chairs the CBS Board.

So, what exactly is it that is going to be built? And is it of any interest at all to CBS researchers?
Gefion is expected to be a “top 25 supercomputer”, referring to the twice-yearly Top500 ranking. This means it will not be at the very top of the list, but it will still offer far more computing power than has been seen in Denmark before.

Essentially, Gefion is a large GPU (Graphics Processing Unit) cluster consisting of 1,528 NVIDIA H100 Tensor Core GPUs, combined with an extremely fast internal network connecting them (NVIDIA Quantum-2 InfiniBand, 400 Gb/s).

The H100 GPU is designed for large-scale AI and high-performance computing (HPC) workloads and is, according to NVIDIA, more than six times faster than the previous-generation A100 GPU.
Find out more about performance differences

The Danish Centre for AI Innovation’s cooperation with NVIDIA is not only about the infrastructure and the hardware; it also includes access to NVIDIA’s software platforms, training, and expertise.

None of the large university-driven supercomputers in Denmark (Computerome at DTU/KU, GenomeDK at AU, Sophia at DTU) offer GPU clusters anywhere near the size of Gefion. Some GPUs, including a small number of H100s, are available from SDU and AAU through DeiC Interactive HPC. See the article below.

What makes GPUs so important for anything connected with AI, and why can we not simply rely on traditional CPU (Central Processing Unit) computing?

According to Professor Claudio Pica (Director of the SDU eScience Center): “…The GPU contains thousands of small cores which can perform simple tasks many thousand times faster than a CPU…”. Click to learn more

The time-saving performance of these supercomputers could prove very valuable to CBS researchers working with large datasets.
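To make the CPU/GPU difference concrete, here is a minimal sketch using PyTorch (chosen purely for illustration; it is not tied to any particular CBS or Gefion setup) that times the same matrix multiplication on both devices:

```python
import time
import torch

# A single large matrix multiplication: the kind of massively
# parallel arithmetic that GPUs are built for.
n = 4096
a, b = torch.randn(n, n), torch.randn(n, n)

t0 = time.perf_counter()
a @ b
cpu_s = time.perf_counter() - t0
print(f"CPU: {cpu_s:.2f} s")

if torch.cuda.is_available():      # e.g. on an H100 node
    a_gpu, b_gpu = a.cuda(), b.cuda()
    a_gpu @ b_gpu                  # warm-up (kernel setup etc.)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    a_gpu @ b_gpu
    torch.cuda.synchronize()       # wait for the kernel to finish
    gpu_s = time.perf_counter() - t0
    print(f"GPU: {gpu_s:.4f} s (~{cpu_s / gpu_s:.0f}x faster)")
```

On GPU-equipped hardware, the second timing is typically orders of magnitude smaller, which is exactly the effect described in the quote above.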

Can CBS researchers get access to the Gefion computer?
We are still waiting for more information about the access and cost model (the prices!) for the Gefion computer, but companies and industry are expected to pay more than universities and researchers.

At the presentation event, there was a lot of talk about the impact of the new computer on Danish research and innovation in engineering, life sciences, medicine, and other natural sciences, and not much about the social sciences and humanities. Hopefully, access will be possible for researchers in Denmark regardless of discipline, as long as the scientific quality of the candidate research projects is deemed high enough.

But do CBS researchers need access? 
Do social sciences and humanities researchers need that much computing power? The clear and distinct answer is: probably. Not necessarily right at this moment, judging by the sizes of the datasets and the complexity of the analyses performed by CBS researchers. But things are changing, and looking at the kinds of AI/ML computations that CBS and other social sciences/humanities universities are gradually moving into, the need will most probably arise within one to two years.

The most obvious example would be deep learning, needed for training LLMs (Large Language Models). Other examples could be the analysis of high-frequency financial data or computer vision/image recognition (needed, for instance, for predictive maintenance in industry).
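A back-of-the-envelope calculation shows why LLM training in particular calls for a cluster rather than a single machine. Using the common rule of thumb that training takes roughly 6 FLOPs per parameter per token (the model size, corpus size, and throughput below are illustrative assumptions, not measurements):

```python
# Rule of thumb: training FLOPs ≈ 6 × parameters × training tokens.
params = 7e9        # a 7B-parameter model (roughly Mistral-7B-sized)
tokens = 2e12       # an assumed 2-trillion-token training corpus
total_flops = 6 * params * tokens

# Very rough sustained low-precision throughput of one H100 (~1 PFLOP/s).
h100_flops_per_s = 1e15
gpu_days = total_flops / h100_flops_per_s / 86_400
print(f"~{gpu_days:.0f} days on a single H100")   # prints ~972 days
```

Spread across hundreds of GPUs in a cluster like Gefion, the same job shrinks from years to days or weeks.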

Read more:
Novo Nordisk Foundation, NVIDIA partner on AI research center

 

Denmark to build one of the world’s most powerful AI supercomputers, accelerating solutions to societal challenges

https://deic.dk/da/news/2024-4-10/deic-interactive-hpc-faar-nvidia-hopper-gpuer (in Danish only)

Questions? Please reach out to Lars Nondal

04/16/2024

By Lars Nondal

As mentioned in the previous article, CBS researchers might well need more computing power in the near future.

How do I know? Because CBS researchers and students were very keen to jump on the bandwagon when the H100 GPU was first made available to them (on a smaller scale, of course) through the national HPC (High Performance Computing) services provided by DeiC earlier this year.

A few months ago, the SDU eScience Center introduced 16 H100 GPUs in DeiC Interactive HPC. So far, a substantial share of these resources has been allocated to CBS, for free or, more precisely, paid for by the annual CBS contribution to DeiC. We still do not know the Gefion cost model, but we expect that researchers and universities will have to bear at least some of the costs of running the system.

Increasing use of HPC at CBS
The H100 GPUs have already proved quite popular, even though they had never been advertised at CBS until now. The CBS RDM Support has received several grant applications for access to H100 GPUs, from CBS researchers as well as master’s students writing their final theses.

Without going into too much detail, here are some of the ongoing CBS projects using H100 GPUs on DeiC Interactive HPC (UCloud) right now:

  • Project(s) working with quality control in manufacturing (computer vision and image detection), using the deep learning YOLO (You Only Look Once) models.
  • Project working with predictive maintenance in industry (sensor data, LSTM (Long Short-Term Memory) neural networks); see the sketch after this list.
  • Project working with applying Transformer architectures from NLP (Natural Language Processing) to financial time series forecasting and asset management.
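To give a flavour of the predictive-maintenance pattern mentioned above, here is a minimal, hypothetical PyTorch sketch (all dimensions and data are made up; it does not describe the actual project):

```python
import torch
import torch.nn as nn

# Hypothetical setup: predict a machine-failure probability from a
# window of multivariate sensor readings. Dimensions are made up.
class SensorLSTM(nn.Module):
    def __init__(self, n_sensors=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_sensors)
        _, (h_n, _) = self.lstm(x)        # final hidden state
        return torch.sigmoid(self.head(h_n[-1]))  # failure probability

model = SensorLSTM()
window = torch.randn(32, 100, 8)          # 32 windows of 100 time steps
print(model(window).shape)                # torch.Size([32, 1])
```

Training such a network on long sensor histories is exactly the kind of workload where a GPU pays off.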

The H100 GPUs are also being used in teaching. The course Artificial Intelligence and Machine Learning (Daniel Hardt/MSC, Nicolai Blegvad Thomsen/MSC) has a special setup in which a version of Mistral, one of the leading openly available LLMs (Large Language Models), is downloaded and made available to students in the course. The model is not used for training or fine-tuning - that would put too much demand on our limited pool of H100 GPU resources - but for inference/prompting via API and JupyterLab notebooks (Python).
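As an illustration of what inference/prompting against a locally hosted model can look like, here is a sketch that assumes the model is served behind an OpenAI-compatible HTTP endpoint (a common pattern with serving stacks such as vLLM; the URL and model name below are placeholders, not the actual course setup):

```python
import requests

# Placeholder endpoint and model name: many serving stacks expose an
# OpenAI-compatible chat API for a locally hosted model.
URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "mistral-7b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize what a GPU cluster is."}
    ],
    "max_tokens": 200,
}

response = requests.post(URL, json=payload, timeout=60)
print(response.json()["choices"][0]["message"]["content"])
```

Because only inference runs on the shared GPUs, many students can prompt the same hosted model without each needing their own GPU allocation.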

Find out how to access DeiC Interactive HPC and how to apply for computing and storage resources

Check out our GitHub page for tutorials, guides, etc.

Please direct all questions to rdm@cbs.dk

09/15/2023
Written by Lars Nondal

Twice a year, Danish researchers can apply for resources on UCloud/DeiC Interactive HPC, one of the national High-Performance Computing services provided by the Danish e-Infrastructure Cooperation (DeiC).

Recently, two CBS research projects were granted large amounts of HPC computing and storage resources. The two research projects are led by Lasse Heje Pedersen from the Department of Finance (FI) and Jan Stuckatz from the Department of International Economics, Governance and Business (EGB).

Lasse Heje Pedersen has been granted 1,000,000 ‘CPU core-hours’ and 3,000 GB of storage to be used by the researchers and PhD students associated with the BIGFI project and its sub-projects.
The general idea is to merge big data (microdata) on the economic agents (households and financial institutions) with market data on aggregate outcomes, in order to examine empirically both the microfoundations of a phenomenon and its market-wide effects.
BIGFI is a Center of Excellence under the Danish National Research Foundation.

“We are extremely grateful to have access to this high-performance computing, which means that researchers can quickly start implementing large models with big data, rather than everyone having to build their own infrastructure – and even better with this grant.” – Lasse Heje Pedersen, FI

Jan Stuckatz has been granted 30,000 CPU hours and 1,200 GB of storage for his project Money in Politics at Work: An Individual-level Analysis of Employee Campaign Donations.
The project investigates how businesses and economic elites influence U.S. politics, and how the economic power of corporations translates into political power. More specifically, it links individual campaign donation data to U.S. national voter registration data (which party the voter is registered with, if registered) to investigate how important the workplace is for individual political donations.

Bonus information:
A “core-hour” is a unit of computational time: one core-hour represents running a single CPU core for one hour, so employing 1,000 cores for an hour consumes 1,000 core-hours.
One million core-hours thus corresponds to approx. 125,000 hours on a standard 8-core laptop and 15,625 hours on the largest (64-core) UCloud machine.
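The conversion is simple division, as this quick sketch shows (the core counts of 8 and 64 are those implied by the figures above):

```python
def wall_clock_hours(core_hours, n_cores):
    """Convert a core-hour budget into wall-clock hours on a machine
    that keeps all n_cores busy."""
    return core_hours / n_cores

budget = 1_000_000                      # the BIGFI grant
print(wall_clock_hours(budget, 8))      # standard laptop  -> 125000.0
print(wall_clock_hours(budget, 64))     # largest UCloud   -> 15625.0
```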

Do you also need HPC computing and storage resources?
If you need more computing power, you can also apply. The next peer-reviewed application round for national resources will take place in September 2023, for resources to be used from Jan 1, 2024. We also have a pool of local resources you can apply for anytime. 
If you are interested, please do not hesitate to contact the RDM Support.

Click for more information about High-Performance Computing

 
