Volunteer Computing at Omega Verksted!
BOINC
Omega Verksted uses the Berkeley Open Infrastructure for Network Computing (BOINC) to contribute to various science projects using excess computing power.
BOINC works like this:
Scientists prepare a project that needs huge computational power and, at the same time, can be divided into small parts that may be run as parallel computations. A server is prepared and the BOINC infrastructure is installed on it. From that moment, anyone connected to the Internet can download the client program. The client monitors their computer, and whenever its processors become idle, it downloads some project data and runs calculations.
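For a concrete taste of the client side, BOINC ships with a `boinccmd` command-line tool for talking to a running client. A minimal sketch, assuming `boinccmd` is on the PATH and is run where it can authenticate against the local client (e.g. from the BOINC data directory):

```python
# Minimal sketch: ask the local BOINC client what it is crunching,
# using the stock boinccmd CLI that ships with the client.
import subprocess

def get_tasks() -> str:
    """Return the client's raw task report (one block per work unit)."""
    result = subprocess.run(
        ["boinccmd", "--get_tasks"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(get_tasks())
```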
The workstations available at Omega Verksted, as well as our server infrastructure, contribute when idle. Workloads from a portfolio of projects are distributed across our various hosts according to current project demands.
Performance Data
trashlove@omegav BOINCstats
Note on Interpreting Credits
"Credits" are rewarded to users to signify their computation contributions. BOINC has given a specification for how much one credit is supposed to be worth, but it's up to each project to implement their calculation of credits. Unfortunately (long story short) some projects have failed massively to follow the spec and/or provide a credit computation that can be reasonably comparable to other projects. Additionally, CPU work units and GPU work units are generally not comparable. This is mostly because credits are calculated based on FLOPS. Since GPUs perform highly parallel computations, they always have more throughput and therefore yield more credits than CPUs. CPU computations are still a valuable contribution, as many projects cannot be (or have not been) effectively parallelized, but unfortunately this is not reflected in credits.
Bottom line: the BOINCstats leaderboards are effectively broken with regard to directly comparing credits. For those interested in looking more closely at the leaderboards, Bitcoin Utopia and Collatz Conjecture are examples of projects that do not give reasonably comparable scores. As a consequence, the top of the leaderboards largely consists of people whose credits are mostly made up of these two projects.
The spec definition: one credit is 1/200 of a day of CPU time on a reference computer that does 1 GFLOPS, based on the Whetstone benchmark.
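To make the reference concrete: that definition means a 1 GFLOPS machine earns 200 credits per day of CPU time, so theoretical credit scales linearly with the Whetstone score. A small worked example (the benchmark number is illustrative, not measured):

```python
# Worked example of the credit spec: one credit = 1/200 day of CPU
# time on a 1 GFLOPS (Whetstone) reference machine, so credit per
# CPU-day scales linearly with the benchmark score.

CREDITS_PER_REFERENCE_DAY = 200  # the 1 GFLOPS reference earns 200/day
REFERENCE_GFLOPS = 1.0

def credits_per_cpu_day(whetstone_gflops: float) -> float:
    """Theoretical credits for one day of CPU time at a given score."""
    return CREDITS_PER_REFERENCE_DAY * (whetstone_gflops / REFERENCE_GFLOPS)

# e.g. a core that benchmarks at 4 GFLOPS should earn 800 credits/CPU-day
print(credits_per_cpu_day(4.0))  # -> 800.0
```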
Currently Supported Projects
Project | TL;DR |
---|---|
Climate Prediction | Runs climate models |
World Community Grid | Umbrella project for various humanitarian projects, including analyzing aspects of the human genome, HIV, dengue, muscular dystrophy, cancer, influenza, Ebola, virtual screening, rice crop yields, and clean energy |
Distr. Hardware Evolution | Uses an evolutionary algorithm to create new designs for integrated circuits or optimize existing ones |
Einstein@Home | Searches for weak astrophysical signals from pulsars using data from the LIGO gravitational-wave detectors, the Arecibo radio telescope, and the Fermi gamma-ray satellite |
LHC@Home | Runs simulations (e.g. of beam dynamics and particle physics) in support of the LHC |
Rosetta@Home | Computes the minimum-energy physical configuration of proteins ("protein folding") to predict how they will behave, with applications in medicine and biology |
Universe@Home | Various research areas in astronomy, including ultraluminous X-ray sources, gravitational waves, and Type Ia supernovae |
GPUGRID | Performs biomedical research using GPU power |
Folding@Home
Since it became hip to run Folding@Home again to help fight SARS-CoV-2, we have also set up Folding@Home on OVPool.
There is a Xen VM template, so a new VM can be created relatively quickly as follows (requires EDB access; see the scripted sketch after the list):
- Create a VM from the "Folding@Home" template
- Set the affinity host to the physical server it is meant to run on (and, if needed, adjust the number of cores to match the physical count available there)
- Boot the VM and change the hostname (to folding[n], where [n] = the number of the affinity host; e.g. folding7 is the one running on xen7)
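For those who prefer scripting over clicking through XenCenter, the same steps can be sketched with the XenAPI Python bindings. This is a hedged sketch, not a blessed procedure: the pool master URL and credentials are placeholders, and `VM.set_affinity` only expresses a (soft) host preference. Setting the in-guest hostname still happens inside the VM as described above.

```python
import XenAPI  # pip install XenAPI

POOL_MASTER = "https://xen-pool.example"  # placeholder: the actual pool master
N = 7                                     # affinity host number

session = XenAPI.Session(POOL_MASTER)
session.xenapi.login_with_password("root", "<password>")
try:
    # Look up the template and the physical host to pin the VM to
    template = session.xenapi.VM.get_by_name_label("Folding@Home")[0]
    host = session.xenapi.host.get_by_name_label(f"xen{N}")[0]

    # Clone the template and turn the clone into a startable VM
    vm = session.xenapi.VM.clone(template, f"folding{N}")
    session.xenapi.VM.provision(vm)

    # Soft-pin the VM to the chosen physical server
    session.xenapi.VM.set_affinity(vm, host)

    session.xenapi.VM.start(vm, False, False)  # start_paused=False, force=False
finally:
    session.xenapi.session.logout()
```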
Then log in to heimdall and set up the config:
- Static DHCP: 172.28.10.[n]
- (Consider creating an alias for that IP address so it is easier to configure later)
- Set up port forwarding (under NAT): remote port 3633[n] to local port 36330 on the IP address from the previous step
The VM can then be added in FAHControl for remote status and control; the password for remote access is the one on the shelf.
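As an illustration of what FAHControl does over that port forward, the FAHClient command server on port 36330 speaks a simple line-based protocol with commands such as `auth` and `queue-info`. A rough sketch; the gateway name matches the setup above, while the host number and password are placeholders:

```python
# Rough sketch: query a folding VM through the heimdall port forward.
# Assumes the FAHClient command server's line-based protocol with the
# auth / queue-info / exit commands; the password is a placeholder.
import socket

HOST = "heimdall"            # the gateway doing the NAT forwarding
N = 7                        # affinity host number (folding7 on xen7)
PORT = 36330 + N             # forwarded port 3633[n] -> the VM's 36330
PASSWORD = "<on the shelf>"  # remote-access password (placeholder)

def fah_command(command: str) -> str:
    """Send one command to the FAHClient command server, return its output."""
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        sock.sendall(f"auth {PASSWORD}\n".encode())  # authenticate first
        sock.sendall(f"{command}\nexit\n".encode())  # run command, then close
        chunks = []
        while data := sock.recv(4096):               # read until server closes
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(fah_command("queue-info"))  # current work units on that VM
```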