Sunday, October 30, 2011

New Parameters for Galacticus v0.9.1

A model such as Galacticus requires several input parameters to be set - these control everything from the cosmological model (which is well known) to the details of feedback from supernova explosions (which aren't well known). Choosing the best values for these parameters isn't easy - their effects on model results are non-linear and correlated. The best solution is to survey the parameter space using Bayesian techniques to constrain the model against various data sets. That work is under way, but it's a slow process. Until then, I've found a set of parameters which works reasonably well at matching a variety of data sets. These have been coded into v0.9.1 of Galacticus as the defaults, and are available as an input file here.

The stellar mass function looks like this:

Not too bad - the "bump" around M* is a limitation of the feedback model used, so there's scope for improvement there. It could be better, but I wanted to get a closer match to the K-band luminosity function and HI mass function at z=0:

The model does very well at matching the color-magnitude relation of galaxies measured from the SDSS:
Other constraints that I looked at were the Tully-Fisher relation and galactic disk sizes (where this model has the common problems of producing disks which rotate somewhat too fast and massive disks which are too small - problems that are likely related), the black hole mass-spheroid mass relation (which works quite well), the luminosity function of different morphological classes (which also works well), and the star formation history of the Universe (not bad, although somewhat too high at low redshifts).

There's room for improvement in all of these results - a systematic search of parameter space will either find parameters that give much better fits, or it will show that no parameter values give a good fit. In that case there'll be work to do figuring out what physics is missing or needs to be modelled more accurately.

Sunday, October 16, 2011

Running Galacticus in Parallel

Galacticus is very well suited to the "embarrassingly parallel" approach to dividing the work of computing a model between multiple CPUs. Typically, each merger tree in a Galacticus model evolves independently from all others. So, each available CPU can work on a separate merger tree, with the results being combined once all trees have been computed. In this tutorial, we describe two ways in which you can run Galacticus in parallel.

OpenMP

Under v0.9.1 of Galacticus, you can use OpenMP parallelism to automatically run merger trees on all available cores of a shared memory machine. Galacticus v0.9.1 is built with OpenMP parallelism by default, so simply run it as usual. Each available OpenMP thread will request a tree to process. Once it's finished with that tree, it will request another. This continues until all trees have been processed. The result is a single Galacticus output, which can be used just as any other. It is a good idea to set the input variable [mergerTreeBuildTreesProcessDescending] to true when using merger trees built with Press-Schechter techniques under OpenMP. This causes the most massive trees to be assigned to threads first, which results in better load balancing.
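
For example, on an eight-core machine you might control the thread count with the standard OMP_NUM_THREADS environment variable before launching the run (a minimal sketch - the executable and parameter file names here are just placeholders for your own):

export OMP_NUM_THREADS=8
./Galacticus.exe inputParameters.xml

Each of the eight threads will then pull trees from the queue until none remain.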

Multiple Worker Tasks

It is possible to split a single Galacticus model across CPUs on multiple different machines. Two input parameters, [treeEvolveWorkerCount] and [treeEvolveWorkerNumber], are used to perform the split. [treeEvolveWorkerCount] specifies how many workers the model should be split between, while [treeEvolveWorkerNumber] indicates the specific worker for a particular input file.

For example, to split a model between four machines, set [treeEvolveWorkerCount]=4 in each of four otherwise identical input files, and set [treeEvolveWorkerNumber] to 1, 2, 3, and 4 respectively. Then invoke one instance of Galacticus on each machine, each using a different input file. Merger trees will be distributed between the workers, and each will create its own output file.
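
As a concrete illustration, the second of the four input files might then contain entries along these lines (a minimal sketch assuming the <parameter>/<name>/<value> layout used by v0.9.x input files - check one of the supplied parameter files for the exact schema):

<parameter>
  <name>treeEvolveWorkerCount</name>
  <value>4</value>
</parameter>
<parameter>
  <name>treeEvolveWorkerNumber</name>
  <value>2</value>
</parameter>

The other three files would be identical apart from the value of [treeEvolveWorkerNumber].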

Two scripts are available to help with this process. The first, scripts/aux/Split_Models_For_Workers.pl, will take a Galacticus input file and generate a set of input files which divide this model between a given number of workers. To use it, create your input file and then use, for example:


scripts/aux/Split_Models_For_Workers.pl inputParameters.xml 4


to divide the model between four workers. The script will create a set of input parameter files, named inputParameters_1.xml, inputParameters_2.xml, inputParameters_3.xml, and inputParameters_4.xml. The output file name for each input file will be the same as in the original inputParameters.xml file, but with a numerical suffix appended.
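
You can then launch one instance per machine in whatever way you normally dispatch jobs - for example, by hand over ssh (the host names and path below are purely illustrative; on a cluster you would more likely submit four separate batch jobs):

ssh node1 "cd /path/to/galacticus; ./Galacticus.exe inputParameters_1.xml" &
ssh node2 "cd /path/to/galacticus; ./Galacticus.exe inputParameters_2.xml" &
ssh node3 "cd /path/to/galacticus; ./Galacticus.exe inputParameters_3.xml" &
ssh node4 "cd /path/to/galacticus; ./Galacticus.exe inputParameters_4.xml" &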

The second script, scripts/aux/Merge_Models.pl, will merge together the outputs from all workers. Simply use:


scripts/aux/Merge_Models.pl model1.hdf5 model2.hdf5 model3.hdf5 model4.hdf5 model.hdf5


for example, to merge the four outputs model1.hdf5, model2.hdf5, model3.hdf5, and model4.hdf5 into a single Galacticus file, model.hdf5, which can then be analyzed as normal. You can also use wildcards, e.g.:



scripts/aux/Merge_Models.pl model?.hdf5 model.hdf5


would achieve the same merge.

Wednesday, August 24, 2011

The Tortoise and the Hare

It's not uncommon for me to hear the criticism that "Galacticus is too slow...". I can see this point of view, but I think it's missing something really, really crucial - something that Aesop understood in his fable of "The Tortoise and the Hare". The bottom line: it's better to get there slowly than not at all. Or, more specifically, it's better to get the right answer slowly than the wrong answer quickly.

First of all, we have to define what we mean by "slow" in this case. Typically, it's used to mean that Galacticus is slow compared to other, comparable models of galaxy formation. That's correct. But the comparison is only really fair if the models being compared are solving the same physics, at the same level of complexity. Galacticus typically runs more complicated models of black hole evolution and star formation than other models, for example. Fortunately, Galacticus can mimic the implementations of other models, as I discussed here. So, let's take the Baugh et al. (2005) model and run it within Galacticus, then compare it to the same model run with its original code.

It's already been demonstrated (see this article) that Galacticus produces results which are converged to the level of a percent or better due to the way that it constructs galaxy merger trees and solves the differential equations that describe galaxy formation. To do a fair comparison we need to run the Baugh et al. (2005) model using its original code to a comparable level of accuracy. The accuracy in that case is controlled by the number of steps, Nstep, used in solving the baryonic physics - this is a fixed number in the original Baugh et al. (2005) code, whereas in Galacticus steps are chosen adaptively to preserve a specified tolerance.

A typical number used is Nstep=100. So, we'll compare results for Nstep=100 and Nstep=300. As you can see in the plot below (which shows the mean masses of hot gas, cold gas, and stars in dark matter halos of different masses at z=0), 100 steps is clearly not enough to reach a comparable level of convergence. For example, the mean stellar mass in high-mass halos is systematically reduced by around 5% when using 300 steps.


OK, so let's try Nstep=1000:
It's an improvement (at least for high-mass halos - not sure what's going on in the low-mass systems....) but still not perfect.

So, how long do these calculations take? Here are benchmarks for running these models (same physics, same implementation, same set of merger trees, same computer - so they're directly comparable):

Galacticus         :  31208 seconds
Original (Nstep= 100):   2070 seconds
Original (Nstep= 300):  22270 seconds
Original (Nstep=1000): 262866 seconds

So, Galacticus is certainly slower than the original model run with Nstep=100, but very similar in speed to the Nstep=300 case and much faster than Nstep=1000.

So, is Galacticus slow? Well, it depends what you want! It's certainly slower than some other models when they're run at their usual level of accuracy. Match accuracies though and it's as fast, if not faster.

Should we care about 5% accuracy though? After all, much of the physics going into the models is uncertain by significant amounts (certainly much larger than 5%). I think the answer is that we should care. First, in any kind of computational physics we should always do convergence tests and push our results to whatever accuracy we need - I'd aim for 1% in these models. Second, Galacticus and other models in its class are being used in large-scale, Bayesian searches of the model parameter space to generate constraints on parameters by fitting to observational datasets. Many of those observational data have errors that are much smaller than 5% - we really want the model to be numerically accurate at a higher level than the data. Otherwise, the derived constraints will be subject to biases that arise purely from numerical artifacts that we could have removed by performing a more accurate calculation.

I'm hoping that we'll see more emphasis placed on assessing the accuracy of results from galaxy formation models in the future......

Tuesday, August 23, 2011

Galacticus v0.9.0 Released!

After a little over one year of development, I'm happy to announce that Galacticus v0.9.0 is now the official, stable version of Galacticus. This means that v0.0.1 gets retired (so it won't get further bug fixes) and development shifts to v0.9.1. More importantly, it means that v0.9.0 is the version you should use by default - no new features will be added (keeping it stable), but bug fixes will be made where necessary.

While the code itself is now fixed, there are a couple of things still to come: an automated install script and some new parameter sets which should give good matches to observational data with v0.9.0.

There's a lot that's new in v0.9.0 - the full list of changes can be found here - but here are some of the highlights:

Black Holes

Since the standard black hole implementation has grown in complexity, I introduced a much simpler version which can be used when you don't care about having such a detailed treatment of black holes. In the simple black hole implementation the black hole mass grows at a rate proportional to the spheroid star formation rate and feedback physics is implemented in a simple way with fixed efficiencies.
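
Schematically, the black hole growth rate in this simple mode is just dM_BH/dt = ε × (spheroid star formation rate), where ε is a fixed efficiency set in the input file (ε here is just shorthand for that efficiency parameter, not its actual name).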

Star Formation Rates

I've added four new star formation timescale algorithms for disks, all based on the surface density profile. These are the classic Kennicutt-Schmidt law, the extended Schmidt law, the Blitz & Rosolowsky method, and the Krumholz-McKee-Tumlinson method. These provide a better match to observed galaxies, either because they're empirically calibrated to do so, or because they actually include the relevant physics.

Chemistry and Molecular Hydrogen Cooling

Galacticus now has quite general support for tracking different chemical species and their reactions. This is used to follow a network of molecular hydrogen reactions for early Universe physics which is exploited in a molecular hydrogen cooling function.

Galaxy Clustering and the Halo Model

Halo model-based calculations of galaxy clustering can now be easily made using Galacticus. Biases and Fourier-transformed dark matter halo density profiles can be output, and scripts which use these to construct power spectra and correlation functions are provided. When merger trees from N-body simulations are used, positional information can be output directly, permitting direct calculations of galaxy clustering statistics.

Redshift Surveys

Simple support is provided for constructing redshift surveys of galaxies directly from Galacticus outputs, providing redshifts along with comoving and luminosity distances.

Dust Absorption/Emission with Grasil

I've added a module which automates processing of Galacticus galaxies through the Grasil dust radiative transfer code, allowing calculation of SEDs from UV through radio wavelengths.

Running Grids of Models

The Run_Galacticus.pl script now allows for running of nested grids of parameters - making it easy to explore model parameter space.

What's Next?

Work is already underway on v0.9.1 of Galacticus. Besides a significant restructuring of Galacticus' internal data structures (to make it easier to expand and modify components) and updated physics, expect more extensive support for C/C++ interoperability and both OpenMP and MPI parallelism. Also, I have a few optimization ideas that could potentially speed up the code significantly....

Friday, July 22, 2011

Reconstructing Other Models Using Galacticus

One of the original goals of the Galacticus project was to be able to reproduce the behavior of other semi-analytic models - and to do so not just approximately, but precisely and algorithmically (i.e. using precisely the same algorithms as those models). Why would you want to do this? Read on......

Semi-analytic models are complex - Galacticus is around 100KLOC (kilo-lines of code), not a huge software package by commercial standards, but more than big enough to mean it takes a lot of work to maintain and to understand all of its behavior. Given this complexity it's impossible to fully define such a model by describing the algorithms in journal articles - the traditional way of communicating how a calculation was performed. If done well, a description of this form can probably get close to fully defining a model, but there will always be details that are missed.

The best, and perhaps only, way to fully define the model then is through the source code - which is why Galacticus is Free and Open Source Software. Want to know how a specific calculation is implemented? Download the source and go look.

This problem is crucial because the scientific method rests on the idea of reproducibility of results - if you can't reproduce my results, then they're probably wrong. You could argue that any model which is so complex that you can't reproduce its results from the published description in a journal isn't really a robust physical model. But that's not the point! Whether you want to run my code to run your own calculations, or instead want to see precisely what I did so you can check whether your own implementation of the same calculation gives the same result, the ultimate, precise, complete definition of the algorithm is the source code itself (and, perhaps, the relevant programming language specification).

While Galacticus is Free and Open Source Software, other semi-analytic codes are not. But Galacticus is highly modular, so in principle we can implement the algorithms of those other models in Galacticus and it should give the same results that they do. This would be very useful for understanding differences between models, and allowing us to test the reproducibility of those results.

The problem, of course, is that typically all we have to go on is the published journal articles describing those other models....... So, reconstructing them with precision might be impossible, and will certainly be very difficult. But, let's try anyway!

Specifically, I've been working on recreating the Baugh et al. (2005) model - this has some features which differ significantly from what was available in Galacticus, making it a suitably challenging test. I'd love to have a video montage to insert here, showing me coding while inspiring music plays..... But I don't, so you'll have to imagine it instead.

The bottom line is that it took about three weeks to get this to work - the biggest challenges were introducing the concept of "halo formation events" into Galacticus, and precisely matching the adiabatic contraction algorithm (crucial since it determines the rotation speeds of galaxies, and star formation rates are proportional to the third power of this in the Baugh et al. model). But, once those were done correctly, the results are very good. Here's a simple example showing, as a function of redshift, the mass density in galaxies and in stars in the original Baugh et al. model and in the reconstructed version using Galacticus.
You can see that the match is very good - although not perfect. There remain some differences - which can be traced to the fundamentally different way in which the two models handle timestepping - but overall, it works! Other quantities (e.g. sizes, luminosities etc.) of galaxies also look good.

So, while it's not easy, it is possible to reconstruct other models using Galacticus. Hopefully I'll repeat this process for some other published models - so that we can compare them on an equal footing. For now, if you want to run the Baugh et al. model using Galacticus, a suitable input parameter file is available here - works with revision 510 or later of Galacticus v0.9.0.

Thursday, July 21, 2011

Accuracy and Semi-Analytic Modelling

I submitted an article today (you can find it on arXiv here) which explores how the accuracy of semi-analytic models of galaxy formation is limited by the input dark matter halo merger trees.

Often, merger trees are extracted from N-body simulations of large scale structure. The usual approach in such simulations has been to dump out all of the particle information at a number of "snapshot" times. These are then post-processed to find dark matter halos which are then linked together to make merger trees. Those merger trees are fed into semi-analytic models which populate them with galaxies.

An obvious question is, "How many snapshots should I take?". The equally obvious answer is, "As many as you can!". But simulations are large, and storing data from many snapshots quickly uses vast amounts of storage space. So, the real question is "How many snapshots do I need to get an accurate answer?" Surprisingly, this question hasn't been answered. There are a few short statements in a few papers that more-or-less say "We checked that we have enough snapshots." but which don't give any details. But, beyond that, nothing.

Fortunately, Galacticus is ideally suited to answering this question. Using Galacticus, I was able to construct merger trees with effectively infinite time resolution, and then degrade these to appear as they would if they'd been extracted from an N-body simulation with a finite number of snapshots. I could then see how the average properties of galaxies changed as I gradually increased the number of snapshots used.

Quantitatively, the answer is that 128 or more snapshots are sufficient to get most average galaxy properties to an accuracy of 5%. With fewer snapshots, errors begin to increase rapidly. For reference, the much-utilized Millennium simulation used 64 snapshots - not too bad, but few enough to lead to 20-30% errors in some quantities.

You might argue that errors at this level aren't too important (certainly there are other uncertainties in the galaxy formation calculation that are much larger than this). I would agree that, quantitatively, that's true. Of course, we didn't know that the errors were this small until the test was done! Perhaps more importantly, checking for convergence with respect to numerical parameters (such as the number of timesteps) is just good practice in any kind of computational physics and should always be done.

Another nice feature of Galacticus: because it's free, anyone can run similar tests themselves (example parameter files are available here), and figure out how many snapshots they should use before they run an expensive N-body simulation.

Sunday, July 10, 2011

Code Profiling and the Slowest Steps of Galaxy Formation


Galacticus isn't the fastest semi-analytic model of galaxy formation out there. It was never meant to be. Instead, the goal was to focus on flexibility and accuracy. Want it to go faster? Buy more computers!

Of course, speed is an issue though, so it's useful to know which parts of the calculation are the slowest. Building merger trees is actually very fast - it's a minor contributor to the overall run time. Solving the baryonic physics is the slow part. I regularly run Galacticus through valgrind to profile the code. As a result of this, many parts have been optimized and there's now no single part where additional optimization would result in a significant improvement in speed.
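
As an aside, if you want to profile your own runs, one option (not necessarily my exact invocation - the executable and parameter file names here are just placeholders) is valgrind's callgrind tool:

valgrind --tool=callgrind ./Galacticus.exe parameters.xml

The resulting callgrind.out.<pid> file can be browsed with a viewer such as kcachegrind; expect the run to be substantially slower than normal under the profiler.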

Instead, the limiting factor is the size and number of timesteps that the ODE solver must take to advance galaxies forward in time with the requested accuracy. I recently added some new functionality to Galacticus to track the sizes of timesteps taken by the ODE solver as it solves the physics of galaxy formation (*). Perhaps more importantly, it also figures out which properties of galaxies are responsible for limiting those timesteps - i.e. which cause the bottleneck in evolving galaxies forward in time.

Here's some typical output from the new ODE evolver meta-data collecting routines, showing the distribution of timesteps taken:


and which properties limit the timesteps:

The timestep distribution peaks around 10^-3 Gyr, which is a small fraction of typical galaxy dynamical times. It falls off rapidly below this, but has an extended tail to long times. From the histogram it's clear that it's the gas component of the spheroid that mostly limits the stepsize taken. Some exploration shows that this is due to disk instabilities.

An unstable disk can funnel gas (and stars) to the spheroid on a timescale as short as the disk dynamical time. The timescale for changing the gas mass of the spheroid in this way is therefore (M_spheroid/M_disk) t_disk, which can be arbitrarily short if the spheroid mass is low (as happens when a spheroid is created as a result of a disk instability event). Even with sensible absolute tolerance limits set on the spheroid evolution, this can require some short timesteps to accurately track the evolution of the spheroid. Probably the best solution for this would be a better motivated model for disk instabilities.....
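
To put illustrative numbers on this: if a freshly formed spheroid holds only ~1% of the disk's gas mass and the disk dynamical time is ~0.1 Gyr, the implied timescale is ~10^-3 Gyr - right around the peak of the timestep distribution above (these numbers are purely illustrative, not taken from the model output).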

(* The ODE profiling code is available in the current development version of Galacticus, but is deactivated by default as it needs a hacked FGSL library to work. If you want to use it, let me know and I'll send you the modified FGSL.)