2015-08-31

The stack build tool is a
cross-platform program for developing Haskell projects. It is aimed at
Haskellers both new and experienced. I recently put together an in-depth guide
to using stack for Haskell development.

The official home for this document is in the stack
repository.
Below is the full text of the guide at the time of writing this blog post. If
you have corrections or ideas for improvements, please send edits to the
GitHub repository.

stack is a cross-platform program for developing Haskell projects. This guide
is intended to step a new stack user through all of the typical stack
workflows. It will not teach you Haskell, nor will it look at much code, and it
does not presume prior experience with the Haskell packaging system or other
build tools.

What is stack?

stack is a modern build tool for Haskell code. It handles the management of
your toolchain (including GHC, the Glasgow Haskell Compiler, and, for Windows
users, MSYS), building and registering libraries, building build tool
dependencies, and much more. While stack can use existing tools on your system,
stack has the capability to be your one-stop shop for all Haskell tooling you
need. This guide will follow that approach.

What makes stack special? Its primary design point is reproducible builds.
The goal is that if you run stack build today, you'll get the same result
running stack build tomorrow. There are some exceptions to that rule (changes
in your operating system configuration, for example), but overall it follows
this design philosophy closely.

stack has also been designed from the ground up to be user friendly, with an
intuitive, discoverable command line interface. For many users, simply
downloading stack and reading stack --help will be enough to get up and
running. This guide is intended to provide a gradual learning process for users
who prefer that learning style.

Finally, stack is isolated: it will not make changes outside of specific
stack directories (described below). Do not be worried if you see comments like
"Installing GHC": stack will not tamper with your system packages at all.
Additionally, stack packages will not interfere with packages installed by
other build tools like cabal.

NOTE In this guide, I'll be running commands on a Linux system (Ubuntu 14.04,
64-bit) and sharing output from there. Output on other systems, or with
different versions of stack, will be slightly different. But all commands work
in a cross-platform way, unless explicitly stated otherwise.

Downloading

There's a wiki page dedicated to downloading
stack which has the
most up-to-date information for a variety of operating systems, including
multiple Linux flavors. Instead of repeating that content here, please go check
out that page and come back here when you can successfully run stack
--version. The rest of this section will demonstrate the installation
procedure on a vanilla Ubuntu 14.04 machine.

That's it, stack is now up and running, and you're good to go. In addition,
it's a good idea, though not required, to set your PATH environment variable to
include $HOME/.local/bin:
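
Assuming a bash-style shell, that looks like the following (which file to put it in, e.g. ~/.bashrc, depends on your setup):

```shell
# Prepend stack's default binary install directory to the search path
# (bash syntax; add this line to e.g. ~/.bashrc to make it permanent)
export PATH="$HOME/.local/bin:$PATH"
# The directory should now be first on the search path
echo "$PATH" | cut -d: -f1
```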

Hello World

Now that we've got stack, it's time to put it to work. We'll start off with the
stack new command to create a new project. We'll call our project
helloworld, and we'll use the new-template project template:

You'll see a lot of output since this is your first stack command, and there's
quite a bit of initial setup it needs to do, such as downloading the list of
packages available upstream. Here's an example of what you may see, though your
exact results may vary. Over the course of this guide a lot of the content will
begin to make more sense:

Great, we now have a project in the helloworld directory. Let's go in there
and have some fun, using the most important stack command: build.
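
From the project directory, that is simply:

```shell
cd helloworld
stack build
```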

That was a bit anticlimactic. The problem is that stack needs GHC in order to
build your project, but we don't have one on our system yet. Instead of
automatically assuming you want it to download and install GHC for you, stack
asks you to do this as a separate command: setup. Our message here lets us
know that stack setup will need to install GHC version 7.10.2. Let's try that
out:
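
As suggested by the error message:

```shell
stack setup
```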

It doesn't come through in the output here, but you'll get intermediate
download percentage statistics while the download is occurring. This command
may take some time, depending on download speeds.

NOTE: GHC gets installed to a stack-specific directory, so calling ghc on the
command line won't work. See the stack exec, stack ghc, and stack runghc
commands below for more information.

But now that we've got GHC available, stack can build our project:

If you look closely at the output, you can see that it built both a library
called "helloworld" and an executable called "helloworld-exe". We'll explain in
the next section where this information is defined. For now, though, let's just
run our executable (which just outputs the string "someFunc"):

And finally, like all good software, helloworld actually has a test suite.
Let's run it with stack test:

Reading the output, you'll see that stack first builds the test suite and then
automatically runs it for us. For both the build and test command, already
built components are not built again. You can see this by running stack build
and stack test a second time:

In the next three subsections, we'll dissect a few details of this helloworld
example.

Files in helloworld

Before digging deeper into stack itself, let's get to know our project a bit better.

The app/Main.hs, src/Lib.hs, and test/Spec.hs files are all Haskell
source files that compose the actual functionality of our project, and we won't
dwell on them too much. Similarly, the LICENSE file has no impact on the build,
but is there for informational/legal purposes only. That leaves Setup.hs,
helloworld.cabal, and stack.yaml.

The Setup.hs file is a component of the Cabal build system which stack uses.
It's technically not needed by stack, but it is still considered good practice
in the Haskell world to include it. The file we're using is straight
boilerplate:
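
That boilerplate, in full:

```haskell
import Distribution.Simple
main = defaultMain
```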

Next, let's look at our stack.yaml file, which gives our project-level settings:
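
A sketch of what stack new generated at the time (key order may differ in your version):

```yaml
flags: {}
packages:
- '.'
extra-deps: []
resolver: lts-3.2
```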

If you're familiar with YAML, you'll see that the flags and extra-deps keys
have empty values. We'll see more interesting usages for these fields later.
Let's focus on the other two fields. packages tells stack which local packages
to build. In our simple example, we have just a single package in our project,
located in the same directory, so '.' suffices. However, stack has powerful
support for multi-package projects, which we'll elaborate on as this guide
progresses.

The final field is resolver. This tells stack how to build your package:
which GHC version to use, versions of package dependencies, and so on. Our
value here says to use LTS Haskell version
3.2, which implies GHC 7.10.2 (which is why
stack setup installs that version of GHC). There are a number of values you
can use for resolver, which we'll talk about below.

The final file of import is helloworld.cabal. stack is built on top of the
Cabal build system. In Cabal, we have individual packages, each of which
contains a single .cabal file. The .cabal file can define one or more
components: a library, executables, test suites, and benchmarks. It also
specifies additional information such as library dependencies, default language
pragmas, and so on.

In this guide, we'll discuss the bare minimum necessary to understand how to
modify a .cabal file. The definitive reference on the .cabal file format is
available on
haskell.org.

The setup command

As we saw above, the setup command installed GHC for us. Just for kicks,
let's run setup a second time:

Thankfully, the command is smart enough to know not to perform an installation
twice. setup will take advantage of either the first GHC it finds on your PATH,
or a locally installed version. As the command output above indicates, you can
use stack path for quite a bit of path information (which we'll play with
more later). For now, we'll just look at where GHC is installed:
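
One way to check, by running which inside stack's modified environment (*nix only):

```shell
stack exec -- which ghc
```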

As you can see from that path, the installation is placed such that it will not
interfere with any other GHC installation, either system-wide, or even
different GHC versions installed by stack.

The build command

The build command is the heart and soul of stack. It is the engine that powers
building your code, testing it, getting dependencies, and more. Quite a bit of
the remainder of this guide will cover fun things you can do with build to get
more advanced behavior, such as building tests and Haddocks at the same time,
or constantly rebuilding, blocking on file changes.

But on a philosophical note: running the build command twice with the same
options and arguments should generally be a no-op (besides things like
rerunning test suites), and should in general produce a reproducible result
between different runs.

OK, enough talking about this simple example. Let's start making it a bit more
complicated!

Adding dependencies

Let's say we decide to modify our helloworld source a bit to use a new library,
perhaps the ubiquitous text package. For example:
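
A minimal sketch of such a change to src/Lib.hs, printing via Data.Text.IO instead of the Prelude:

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Lib
    ( someFunc
    ) where

import qualified Data.Text.IO as T

someFunc :: IO ()
someFunc = T.putStrLn "someFunc"
```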

When we try to build this, things don't go as expected:

Notice that it says "Could not find module." This means that the package
containing the module in question is not available. In order to tell stack that
you want to use text, you need to add it to your .cabal file. This can be done
in your build-depends section, and looks like this:
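
For example, the library section of helloworld.cabal would gain a text entry (surrounding fields are approximate):

```cabal
library
  hs-source-dirs:      src
  exposed-modules:     Lib
  build-depends:       base >= 4.7 && < 5
                     , text
  default-language:    Haskell2010
```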

Now if we rerun stack build, we get a very different result:

What this output means is: the text package was downloaded, configured, built,
and locally installed. Once that was done, we moved on to building our local
package (helloworld). Notice that at no point do you need to ask stack to build
dependencies for you: it does so automatically.

extra-deps

Let's try a more off-the-beaten-track package: the joke
acme-missiles package. Our
source code is simple:
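
A sketch of the modified src/Lib.hs (acme-missiles exports launchMissiles from the Acme.Missiles module):

```haskell
module Lib
    ( someFunc
    ) where

import Acme.Missiles

someFunc :: IO ()
someFunc = launchMissiles
```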

As expected, stack build will fail because the module is not available. But
if we add acme-missiles to the .cabal file, we get a new error message:

Notice that it says acme-missiles is "not present in build plan." This is the
next major topic to understand when using stack.

Curated package sets

Remember up above when stack new selected the lts-3.2 resolver for us? That's
what's defining our build plan, and available packages. When we tried using the
text package, it just worked, because it was part of the lts-3.2 package set.
acme-missiles, on the other hand, is not part of that package set, and
therefore building failed.

The first thing you're probably wondering is: how do I fix this? To do so,
we'll use another one of the fields in stack.yaml, extra-deps, which is used
to define extra dependencies not present in your resolver. With that change,
our stack.yaml looks like:
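
With acme-missiles added as an extra dependency (0.3 was the current version at the time of writing; pick whatever version you need):

```yaml
flags: {}
packages:
- '.'
extra-deps:
- acme-missiles-0.3
resolver: lts-3.2
```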

And as expected, stack build succeeds.

With that out of the way, let's dig a little bit more into these package sets, also known as snapshots. We mentioned lts-3.2, and you can get quite a bit of information about it at https://www.stackage.org/lts-3.2:

The appropriate resolver value (resolver: lts-3.2, as we used above)

The GHC version used

A full list of all packages available in this snapshot

The ability to perform a Hoogle search on the packages in this snapshot

A list of all modules in a snapshot, which can be useful when trying to determine which package to add to your .cabal file

You can also see a list of all available
snapshots. You'll notice two flavors: LTS
(standing for Long Term Support) and Nightly. You can read more about them on
the LTS Haskell Github page. If
you're not sure what to go with, start with LTS Haskell. That's what stack will
lean towards by default as well.

Resolvers and changing your compiler version

Now that we know a bit more about package sets, let's try putting that
knowledge to work. Instead of lts-3.2, let's change our stack.yaml file to use
nightly-2015-08-26. Rerunning
stack build will produce:

We can also change resolvers on the command line, which can be useful in a
Continuous Integration (CI) setting, like on Travis. For example:

When passed on the command line, you also get some additional "short-cut" versions of resolvers: --resolver nightly will use the newest Nightly resolver available, --resolver lts will use the newest LTS, and --resolver lts-2 will use the newest LTS in the 2.X series. The reason these are only available on the command line and not in your stack.yaml file is that using them:

Will slow your build down, since stack needs to download information on the
latest available LTS each time it builds

Produces unreliable results, since a build run today may proceed differently
tomorrow because of changes outside of your control.

Changing GHC versions

Finally, let's try using an older LTS snapshot. We'll use the newest 2.X
snapshot:

This fails, because GHC 7.8.4 (which lts-2.22 uses) is not available on our
system. The first lesson is: when you want to change your GHC version, modify
the resolver value. Now the question is: how do we get the right GHC version?
One answer is to use stack setup like we did above, this time with the
--resolver lts-2 option. However, there's another way worth mentioning: the
--install-ghc flag.

What's nice about --install-ghc is that:

You don't need to have an extra step in your build script

It only requires downloading the information on latest snapshots once

As mentioned above, the default behavior of stack is to not install new
versions of GHC automatically, to avoid surprising users with large
downloads/installs. This flag simply changes that default behavior.

Other resolver values

We've mentioned nightly-YYYY-MM-DD and lts-X.Y values for the resolver.
There are actually other options available, and the list will grow over time.
At the time of writing:

ghc-X.Y.Z, for requiring a specific GHC version but no additional packages

Experimental GHCJS support

Experimental custom snapshot support

The most up-to-date information can always be found on the stack.yaml wiki
page.

Existing projects

Alright, enough playing around with simple projects. Let's take an open source
package and try to build it. We'll be ambitious and use
yackage, a local package server
using Yesod. To get the code, we'll use the stack
unpack command:

This new directory does not have a stack.yaml file, so we need to make one
first. We could do it by hand, but let's be lazy instead with the stack init
command:

stack init does quite a few things for you behind the scenes:

Creates a list of snapshots that would be good candidates. The basic algorithm here is: prefer snapshots you've already built some packages for (to increase sharing of binary package databases, as we'll discuss later), prefer recent snapshots, and prefer LTS. These preferences can be tweaked with command line flags, see stack init --help.

Finds all of the .cabal files in your current directory and subdirectories (unless you use --ignore-subdirs) and determines the packages and versions they require

Finds a combination of snapshot and package flags that allows everything to compile

Assuming it finds a match, it will write your stack.yaml file, and everything
will be good. Given that LTS Haskell and Stackage Nightly have ~1400 of the
most common Haskell packages, this will often be enough. However, let's
simulate a failure by adding acme-missiles to our build-depends and re-initing:

stack has tested four different snapshots, and in every case discovered that
acme-missiles is not available. Also, when testing lts-2.22, it found that the
warp version provided was too old for yackage. The question is: what do we do
next?

The recommended approach is: pick a resolver, and fix the problem. Again,
following the advice mentioned above, default to LTS if you don't have a
preference. In this case, the newest LTS listed is lts-3.2. Let's pick that.
stack has told us the correct command to do this. We'll just remove our old
stack.yaml first and then run it:

As you may guess, stack build will now fail due to the missing acme-missiles.
Toward the end of the error message, it says the familiar:

If you're following along at home, try making the necessary stack.yaml
modification to get things building.

Alternative solution: dependency solving

There's another solution to the problem you may consider. At the very end of
the previous error message, it said:

This approach uses a full blown dependency solver to look at all upstream package versions available and compare them to your snapshot selection and version ranges in your .cabal file. In order to use this feature, you'll need the cabal executable available. Let's build that with:

Now we can use stack solver:

And if we're exceptionally lazy, we can ask stack to modify our stack.yaml file
for us:

With that change, stack build will now run.

NOTE: You should probably back up your stack.yaml before doing this, for
example by committing it to Git/Mercurial/Darcs.

There's one final approach to mention: skipping the snapshot entirely and just
using dependency solving. You can do this with the --solver flag to init.
This is not a commonly used workflow with stack, as you end up with a large
number of extra-deps, and no guarantee that the packages will compile together.
For those interested, however, the option is available. You need to make sure
you have both the ghc and cabal commands on your PATH. An easy way to do this
is to use the stack exec command:

The --no-ghc-package-path flag is described below, and is only needed due to a
bug in the currently
released stack. That bug is fixed in 0.1.4 and forward.

Different databases

Time to take a short break from hands-on examples and discuss a little
architecture. stack has the concept of multiple databases. A database
consists of a GHC package database (which contains the compiled version of a
library), executables, and a few other things as well. Just to give you an
idea:

Databases in stack are layered. For example, the database listing I just gave
is what we call a local database. This is layered on top of a snapshot
database, which contains the libraries and executables specified in the
snapshot itself. Finally, GHC itself ships with a number of libraries and
executables, which forms the global database. Just to give a quick idea of
this, we can look at the output of the ghc-pkg list command in our helloworld
project:

Notice that acme-missiles ends up in the local database. Anything which is
not installed from a snapshot ends up in the local database. This includes:
your own code, extra-deps, and in some cases even snapshot packages, if you
modify them in some way. The reason we have this structure is that:

it lets multiple projects reuse the same binary builds of many snapshot packages,

but doesn't allow different projects to "contaminate" each other by putting non-standard content into the shared snapshot database

Typically, the process by which a snapshot package is marked as modified is
referred to as "promoting to an extra-dep," meaning we treat it just like a
package in the extra-deps section. This happens for a variety of reasons,
including:

changing the version of the snapshot package

changing build flags

one of the packages that the package depends on has been promoted to an extra-dep

And as you probably guessed: there are multiple snapshot databases available, e.g.:

These databases don't get layered on top of each other, but are each used separately.

In reality, you'll rarely, if ever, interact directly with these databases, but
it's good to have a basic understanding of how they work so you can understand
why rebuilding may occur at different points.

The build synonyms

Let me show you a subset of the stack --help output:

It's important to note that four of these commands are just synonyms for the
build command. They are provided for convenience for common cases (e.g.,
stack test instead of stack build --test) and so that commonly expected
commands just work.

What's so special about these commands being synonyms? It allows us to make
much more composable command lines. For example, we can have a command that
builds executables, generates Haddock documentation (Haskell API-level docs),
and builds and runs your test suites, with:
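
Using those flags on build directly, that single command is:

```shell
stack build --haddock --test
```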

You can even get more inventive as you learn about other flags. For example,
take the following:
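
A sketch combining several of the flags covered in this guide (adjust the echo message to taste):

```shell
stack build --pedantic --haddock --test --exec "echo Yay, it succeeded" --file-watch
```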

This will:

turn on all warnings and errors

build your library and executables

generate Haddocks

build and run your test suite

run the command echo Yay, it succeeded when that completes

after building, watch for changes in the files used to build the project, and kick off a new build when done

install and copy-bins

It's worth calling out the behavior of the install command and --copy-bins
option, since this has confused a number of users, especially when compared to
the behavior of other tools (e.g., cabal-install). The install command does
precisely one thing in addition to the build command: it copies any generated
executables to the local bin path. You may recognize the default value for that
path:

That's why the download page recommends adding that directory to your PATH
environment variable. This feature is convenient, because now you can simply
run executable-name in your shell instead of having to run stack exec
executable-name from inside your project directory.

Since it's such a point of confusion, let me list a number of things stack does
not do specially for the install command:

stack will always build any necessary dependencies for your code. The install command is not necessary to trigger this behavior. If you just want to build a project, run stack build.

stack will not track which files it's copied to your local bin path, nor provide a way to automatically delete them. There are many great tools out there for managing installation of binaries, and stack does not attempt to replace those.

stack will not necessarily be creating a relocatable executable. If your executable hard-codes paths, copying the executable will not change those hard-coded paths. At the time of writing, there's no way to change those kinds of paths with stack, but see issue #848 about --prefix for future plans.

That's really all there is to the install command: for the simplicity of what
it does, it occupies a much larger mental space than is warranted.

Targets, locals, and extra-deps

We haven't discussed this too much yet, but in addition to having a number of
synonyms, and taking a number of options on the command line, the build command
also takes many arguments. These are parsed in different ways, and can be used
to achieve a high level of flexibility in telling stack exactly what you want
to build.

We're not going to cover the full generality of these arguments here; instead,
there's a Wiki page covering the full build command
syntax.
Instead, let me point out a few different types of arguments:

You can specify a package name, e.g. stack build vector. This will attempt to build the vector package, whether it's a local package, in your extra-deps, in your snapshot, or just available upstream. If it's just available upstream but not included in your locals, extra-deps, or snapshot, the newest version is automatically promoted to an extra-dep.

You can also give a package identifier, which is a package name plus version, e.g. stack build yesod-bin-1.4.14. This is almost identical to specifying a package name, except it will (1) choose the given version instead of latest, and (2) error out if the given version conflicts with the version of a local package.

The most flexibility comes from specifying individual components, e.g. stack build helloworld:test:helloworld-test says "build the test suite component named helloworld-test from the helloworld package." In addition to this long form, you can also shorten it by skipping what type of component it is, e.g. stack build helloworld:helloworld-test, or even skip the package name entirely, e.g. stack build :helloworld-test.

Finally, you can specify individual directories to build, which will trigger building of any local packages included in those directories or subdirectories.

When you give no specific arguments on the command line (e.g., stack build),
it's the same as specifying the names of all of your local packages. If you
just want to build the package for the directory you're currently in, you can
use stack build . (note the trailing dot).

Components, --test, and --bench

Here's one final important yet subtle point. Consider our helloworld package,
which has a library component, an executable helloworld-exe, and a test suite
helloworld-test. When you run stack build helloworld, how does it know which
ones to build? By default, it will build the library (if any) and all of the
executables, but ignore the test suites and benchmarks.

This is where the --test and --bench flags come into play. If you use them,
those components will also be included. So stack build --test helloworld will
end up including the helloworld-test component as well.

You can bypass this implicit adding of components by being much more explicit,
and stating the components directly. For example, the following will not build
the helloworld-exe executable:

We first cleaned our project to clear old results so we know exactly what stack
is trying to do. Notice that it builds the helloworld-test test suite, and the
helloworld library (since it's used by the test suite), but it does not build
the helloworld-exe executable.

And now the final point: the last line shows that our command also runs the
test suite it just built. This may surprise some people who would expect tests
to only be run when using stack test, but this design decision is what allows
the stack build command to be as composable as it is (as described
previously). The same rule applies to benchmarks. To spell it out completely:

The --test and --bench flags simply state which components of a package should be built, if no explicit set of components is given

The default behavior for any test suite or benchmark component which has been built is to also run it

You can use the --no-run-tests and --no-run-benchmarks (from stack-0.1.4.0
and on) flags to disable running of these components. You can also use
--no-rerun-tests to prevent running a test suite which has already passed and
has not changed.

NOTE: stack doesn't build or run test suites and benchmarks for non-local
packages. This is done so that running a command like stack test doesn't need
to run 200 test suites!

Multi-package projects

Until now, everything we've done with stack has used a single-package project.
However, stack's power truly shines when you're working on multi-package
projects. All the functionality you'd expect to work just does: dependencies
between packages are detected and respected, the dependencies of all packages
are built as one cohesive whole, and if anything fails to build, the build
command exits appropriately.

Let's demonstrate this with the wai-app-static and yackage packages:

If you look at the stack.yaml, you'll see exactly what you'd expect:
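
A sketch of such a generated stack.yaml (the exact directory names and flag values depend on the versions you unpacked):

```yaml
flags:
  yackage:
    upload: true
  wai-app-static:
    print: false
packages:
- yackage-0.8.0
- wai-app-static-3.1.1
extra-deps: []
resolver: lts-3.2
```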

Notice that multiple directories are listed in the packages key.

In addition to local directories, you can also refer to packages available in a
Git repository or in a tarball over HTTP/HTTPS. This can be useful for using a
modified version of a dependency that hasn't yet been released upstream. This
is a slightly more advanced usage that we won't go into detail with here, but
it's covered in the stack.yaml wiki
page.

Flags and GHC options

There are two common ways you may wish to alter how a package will install:
with Cabal flags and GHC options. In the stack.yaml file above, you can see
that stack init has detected that, for the yackage package, the upload flag
can be set to true, and for wai-app-static, the print flag to false. (The
reason it chose those values is that they're the default flag values, and
their dependencies are compatible with the snapshot we're using.)

In order to change this, we can use the command line --flag option:
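
For example:

```shell
stack build --flag yackage:-upload
```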

This means: when compiling the yackage package, turn off the upload flag (thus
the -). Unlike other tools, stack is explicit about which package's flag you
want to change. It does this for two reasons:

There's no global meaning for Cabal flags, and therefore two packages can
use the same flag name for completely different things.

By following this approach, we can avoid unnecessarily recompiling snapshot
packages that happen to use a flag that we're using.

You can also change flag values on the command line for extra-dep and snapshot
packages. If you do this, that package will automatically be promoted to an
extra-dep, since the build plan is different from what the snapshot definition
would entail.

GHC options

GHC options follow a similar logic, with a few nuances to adjust for common use cases. Let's consider:
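
For example, passing GHC warning flags through stack:

```shell
stack build --ghc-options="-Wall -Werror"
```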

This will set the -Wall -Werror options for all local targets. The
important thing to note here is that it will not affect extra-dep and snapshot
packages at all. This is by design, once again, to get reproducible and fast
builds.

(By the way, those GHC options above have a special convenience flag: --pedantic.)

There's one extra nuance about command line GHC options. Since they only apply
to local targets, if you change your local targets, they will no longer apply
to other packages. Let's play around with an example from the wai repository,
which includes the wai and warp packages, the latter depending on the former.
If we run:
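
That command, using GHC's -O0 flag to disable optimizations:

```shell
stack build --ghc-options=-O0 wai
```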

It will build all of the dependencies of wai, and then build wai with all
optimizations disabled. Now let's add in warp as well:
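
That is, targeting both packages:

```shell
stack build --ghc-options=-O0 wai warp
```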

This builds the additional dependencies for warp, and then builds warp with
optimizations disabled. Importantly: it does not rebuild wai, since wai's
configuration has not been altered. Now the surprising case:
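
Namely, the same options but targeting only warp:

```shell
stack build --ghc-options=-O0 warp
```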

You may expect this to be a no-op: neither wai nor warp has changed. However,
stack will instead recompile wai with optimizations enabled again, and then
rebuild warp (with optimizations disabled) against this newly built wai. The
reason: reproducible builds. If we'd never built wai or warp before, trying to
build warp would necessitate building all of its dependencies, and it would do
so with default GHC options (optimizations enabled). This dependency would
include wai. So when we run:

We want its behavior to be unaffected by any previous build steps we took.
While this specific corner case does catch people by surprise, the overall goal
of reproducible builds is, in the stack maintainers' view, worth the
confusion.

Final point: if you have GHC options that you'll be regularly passing to your
packages, you can add them to your stack.yaml file (starting with
stack-0.1.4.0). See the wiki page section on
ghc-options
for more information.

path

NOTE: That's it, the heavy content of this guide is done! Everything from here
on out is simple explanations of commands. Congratulations!

Generally, you don't need to worry about where stack stores various files. But some people like to know this stuff. That's when the stack path command is useful.

In addition, this command accepts command line arguments to state which of
these keys you're interested in, which can be convenient for scripting. As a
simple example, let's find out which versions of GHC are installed locally:

(Yes, that command requires a *nix shell, and likely won't run on Windows.)

While we're talking about paths, it's worth explaining how to completely remove stack from your system. It involves deleting just three things (plus an extra directory on Windows):

The stack executable itself

The stack root, e.g. $HOME/.stack on non-Windows systems. See stack path --global-stack-root

On Windows, you will also need to delete the directory reported by stack path --ghc-paths

Any local .stack-work directories inside a project

exec

We've already used stack exec multiple times in this guide. As you've
likely already guessed, it allows you to run executables, but with a slightly
modified environment. In particular: it looks for executables on stack's bin
paths, and sets a few additional environment variables (like
GHC_PACKAGE_PATH, which tells GHC which package databases to use). If you
want to see exactly what the modified environment looks like, try:
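A minimal way to inspect that environment (assuming the standard env utility, as found on most *nix systems):

```shell
# Print the environment exactly as stack's subprocesses see it:
stack exec env
# Or show just the package-database variable:
stack exec env | grep GHC_PACKAGE_PATH
```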

The only trick is how to distinguish flags to be passed to stack versus those for the underlying program. Thanks to the optparse-applicative library, stack follows the Unix convention of -- to separate these, e.g.:
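For instance, the sketch below passes --package to stack itself and everything after the -- to the underlying program (the stm package is an arbitrary illustration):

```shell
# Everything before -- is a stack flag; everything after goes to echo:
stack exec --package stm -- echo "the stm package is now available"
```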

Flags worth mentioning:

--package foo can be used to force a package to be installed before running the given command

--no-ghc-package-path can be used to stop the GHC_PACKAGE_PATH environment variable from being set. Some tools, notably cabal-install, do not behave well with that variable set

ghci (the repl)

GHCi is the interactive GHC environment, a.k.a. the REPL. You can access it with:
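Since GHCi ships as part of the GHC toolchain that stack manages, it can be launched through stack exec:

```shell
# Launch the plain GHCi REPL from stack's toolchain:
stack exec ghci
```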

However, this doesn't do anything particularly intelligent, such as loading up
locally written modules. For that reason, the stack ghci command is
available.

NOTE: At the time of writing, stack ghci was still an experimental feature,
so I'm not going to devote a lot more time to it. Future readers: feel free to
expand this!

ghc/runghc

You'll sometimes want to just compile (or run) a single Haskell source file,
instead of creating an entire Cabal package for it. You can use stack exec
ghc or stack exec runghc for that. As simple helpers, we also provide the
stack ghc and stack runghc commands, for these common cases.
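As a sketch (the file name here is an arbitrary illustration):

```shell
# Compile a single source file with stack's GHC:
stack ghc -- Hello.hs -o hello
# Or interpret it directly, without producing a binary:
stack runghc Hello.hs
```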

stack also offers a very useful feature for running files: a script
interpreter. For too long have Haskellers felt shackled to bash or Python
because it's just too hard to create reusable source-only Haskell scripts.
stack attempts to solve that. An example will be easiest to understand:
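A minimal script along the lines the following paragraphs describe (the file body is illustrative; the resolver, --install-ghc, and turtle package match the options discussed below):

```haskell
#!/usr/bin/env stack
-- stack --resolver lts-3.2 --install-ghc runghc --package turtle
{-# LANGUAGE OverloadedStrings #-}
import Turtle

main :: IO ()
main = echo "Hello from a stack script!"
```

Save this as turtle.hs, make it executable with chmod +x turtle.hs, and run it as ./turtle.hs.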

If you're on Windows: you can run stack turtle.hs instead of ./turtle.hs.

The first line is the usual "shebang" to use stack as a script interpreter. The
second line, which is required, provides additional options to stack (due to
the common limitation of the "shebang" line only being allowed a single
argument). In this case, the options tell stack to use the lts-3.2 resolver,
automatically install GHC if it is not already installed, and ensure the turtle
package is available.

The first run can take a while, since it has to download GHC and build
dependencies. But subsequent runs are able to reuse everything already built,
and are therefore quite fast.

Finding project configs, and the implicit global

Whenever you run something with stack, it needs a stack.yaml project file. The
algorithm stack uses to find this is:

Check for a --stack-yaml option on the command line

Check for a STACK_YAML environment variable

Check the current directory and all ancestor directories for a stack.yaml file

The first two provide a convenient method for using an alternate configuration.
For example: stack build --stack-yaml stack-7.8.yaml can be used by your CI
system to check your code against GHC 7.8. Setting the STACK_YAML environment
variable can be convenient if you're going to be running commands like stack
ghc in other directories, but you want to use the configuration you defined in
a specific project.

If stack does not find a stack.yaml in any of the three specified locations,
the implicit global logic kicks in. You've probably noticed that phrase a few
times in the output from commands above. Implicit global is essentially a hack
to allow stack to be useful in a non-project setting. When no implicit global
config file exists, stack creates one for you with the latest LTS snapshot as
the resolver. This allows you to do things like:

compile individual files easily with stack ghc

build executables you'd want to use without starting a project, e.g. stack install pandoc

Keep in mind that there's nothing magical about this implicit global
configuration. It has no impact on projects at all, and every package you
install with it is put into isolated databases just like everywhere else. The
only magic is that it's the catch-all project used whenever you run stack
outside of any other project.

stack.yaml vs .cabal files

Now that we've covered a lot of stack use cases, this quick summary of
stack.yaml vs .cabal files will hopefully make a lot of sense, and be a good
reminder for future uses of stack:

A project can have multiple packages. Each project has a stack.yaml. Each package has a .cabal file

The .cabal file specifies which packages are dependencies. The stack.yaml file specifies which packages are available to be used

.cabal specifies the components, modules, and build flags provided by a package

stack.yaml can override the flag settings for individual packages

stack.yaml specifies which packages to include
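For reference, a minimal stack.yaml from this era might look like the following (all values are illustrative):

```yaml
# Snapshot that pins the GHC version and package set:
resolver: lts-3.2
# Packages that are part of this project:
packages:
- '.'
# Extra dependencies not present in the snapshot:
extra-deps: []
# Per-package flag overrides:
flags: {}
```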

Comparison to other tools

stack is not the only tool around for building Haskell code. stack came into
existence due to limitations with some of the existing tools. If you're
unaffected by those limitations and are happily building Haskell code, you may
not need stack. If you're suffering from some of the common problems in other
tools, give stack a try instead.

If you're a new user who has no experience with other tools, you should start
with stack. The defaults match modern best practices in Haskell development,
and there are fewer corner cases you need to be aware of. You can develop
Haskell code with other tools, but you probably want to spend your time writing
code, not convincing a tool to do what you want.

Before jumping into the differences, let me clarify an important similarity:

Same package format. stack, cabal-install, and presumably all other tools share the same underlying Cabal package format, consisting of a .cabal file, modules, etc. This is a Good Thing: we can share the same set of upstream libraries, and collaboratively work on the same project with stack, cabal-install, and NixOS. In that sense, we're sharing the same ecosystem.

Now the differences:

Curation vs dependency solving. stack defaults to using curation (Stackage snapshots: LTS Haskell, Nightly, etc) instead of dependency solving, which is cabal-install's default. This is just a default: as described above, stack can use dependency solving if desired, and cabal-install can use curation. However, most users will stick to the defaults. The stack team firmly believes that the majority of users want to simply ignore dependency resolution nightmares and get a valid build plan from day 1, which is why we chose this default behavior.

Reproducible. stack goes to great lengths to ensure that stack build today does the same thing tomorrow. cabal-install does not: build plans can be affected by the presence of preinstalled packages, and running cabal update can cause a previously successful build to fail. With stack, changing the build plan is always an explicit decision.

Automatically building dependencies. In cabal-install, you need to use cabal install to trigger dependency building. This is somewhat necessary due to the previous point, since building dependencies can in some cases break existing installed packages. So for example, in stack, stack test does the same job as cabal install --run-tests, though the latter additionally performs an installation that you may not want. The closer command equivalent is cabal install --enable-tests --only-dependencies && cabal configure --enable-tests && cabal build && cabal test (newer versions of cabal-install may make this command shorter).

Isolated by default. This has actually been a pain point for new stack users. In cabal, the default behavior is a non-isolated build, meaning that working on two projects can cause the user package database to become corrupted.
