Unit Tests

Testing our code in a systematic way

Matthew DeHaven

March 31, 2024


Lecture Summary

  • What are unit tests?
  • Using unit tests in R
    • in packages
    • in projects

Unit Tests Overview

What is a unit test?

Unit tests…

  • take a unit (small piece) of code,
  • run it,
  • and test if the result matches what is expected.

Unit tests are a programming methodology/framework where each test runs on the smallest possible portion of the code, so the errors tell you exactly where something went wrong.

Why unit tests?

Remember back to the start of the course, when we motivated writing scripts as a replacement for point-and-click software?

Unit tests take this one step further.

  • When you merge two data.frames, you print them out to make sure the merge worked.
  • Instead, you could write a unit test that expects a data.frame of a certain size, or one with no NA values (see the sketch below).
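
For example, here is a minimal sketch of that habit written as a test, using the testthat package introduced later in this lecture (the sales and regions tables are hypothetical):

library(testthat)

sales   <- data.frame(id = 1:3, revenue = c(10, 20, 30))
regions <- data.frame(id = 1:3, region = c("east", "west", "east"))
merged  <- merge(sales, regions, by = "id")

## Encode the checks you would otherwise do by eye
expect_equal(nrow(merged), nrow(sales))  ## no rows lost or duplicated
expect_false(anyNA(merged$region))       ## every sale matched a region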

Unit Tests Formalize Current Habits

I want to emphasize this:

  • you already test your code all the time;
  • you just do it on the fly.

When you find yourself checking by hand, for the third time, whether one of your merges worked, you should consider writing a unit test for it instead.

Unit Tests for Packages

Unit tests are most often used for packages.

  • Each function you write should have at least one unit test.
    • verify the function gives the correct result for a few simple cases
    • make sure it gives errors on likely misspecified arguments
    • etc.

Unit Tests for Research Projects

But I think unit tests are incredibly useful for research projects as well:

  • Test if your data has missing observations
  • Test if your regression results are within a certain range
  • Test if your output plots are non-empty
  • etc.

You can then easily rerun all of your tests whenever you update your raw data, or change a step in the analysis.

Continuous Integration

This is a preview of the next lecture.

GitHub can run unit tests

  • every time you push a commit.

If one of your tests doesn’t pass, it sends you an email.

This is an example of what programmers call “Continuous Integration”, where your tests run as you develop the code.

testthat package

Install testthat

In R, the best package for unit tests is testthat, created by (you guessed it) Hadley Wickham.

renv::install("testthat")

And then

library(testthat)
testthat::local_edition(3)

Notice that we set the “edition” of the package after loading it. So many packages relied on testthat edition 2 that the authors couldn’t deprecate all the functions they wanted to change, so they released edition 3 instead (which is what we will use).

Expecting Results

The basic building blocks of testthat unit tests are the expect_() family of functions.

expect_equal(4, 4)

This didn’t return anything: an expectation only produces output when it fails.

expect_equal(4, 3)
Error: 4 (`actual`) not equal to 3 (`expected`).

  `actual`: 4
`expected`: 3

And this is the goal of unit tests: throw a helpful error when the result isn’t what was expected.

Expecting TRUE/FALSE

A couple of very useful expectations are

expect_true(TRUE)
expect_false(3 > 4)

These work with any logical condition, which makes it easy to write your own expectations.

mt_rows <- nrow(mtcars)
expect_true(mt_rows < 30)
Error: mt_rows < 30 is not TRUE

`actual`:   FALSE
`expected`: TRUE 

When you do this, the error messages are less helpful, so it’s better to use a pre-built expect_() function when one exists.
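
For example, the pre-built expect_lt() (“less than”) produces a message that reports the actual difference:

mt_rows <- nrow(mtcars)
expect_lt(mt_rows, 30)  ## fails, and the message includes the difference of 2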

Expecting Identical

If you want to check if two numbers are equal, you can use,

expect_equal(1, 1L)

If you want to check if two numbers are exactly equal, you use

expect_identical(1, 1L)
Error: 1 (`actual`) not identical to 1L (`expected`).

`actual` is a double vector (1)
`expected` is an integer vector (1)

expect_equal() has a default tolerance of \(1.4901161 \times 10^{-8}\) (the square root of machine epsilon), which we can override to allow fuzzy equality.

expect_equal(0.9999, 1, tolerance = 0.001)
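
Without the looser tolerance, the same comparison would fail, since the relative difference (about \(10^{-4}\)) is larger than the default:

expect_equal(0.9999, 1)  ## errors: 0.9999 not equal to 1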

Expecting Types

It can be useful to expect a certain data type.

expect_type(2, "double")
expect_type("hello", "character")
expect_type("hello", "double")
Error: "hello" has type 'character', not 'double'.

You can also expect classes,

expect_s3_class(mtcars, "data.frame")
expect_s3_class(lm(Sepal.Width ~ Sepal.Length, iris), "lm")

Expecting Warnings and Errors

Sometimes you will want to expect an error.

expect_error(lm(Sepal.Width ~ Sepal.Length, data = mtcars))

Or you will want to expect a warning.

expect_warning(mean(c("a", "b", "c")))  ## mean() returns NA and a warning 
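
Both expectations also accept a regexp argument, so you can assert on the message itself:

expect_warning(mean(c("a", "b", "c")), "not numeric")
expect_error(lm(Sepal.Width ~ Sepal.Length, data = mtcars), "not found")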

Testing Our Own Function

Let’s write a very basic function and some tests for it.

rmse <- function(actual, predicted) {
  rmse <- sqrt(mean((actual - predicted) ^ 2))
  return(rmse)
}

And some things we could test:

expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))

And for a test that does not pass:

expect_error(rmse(c(1,2), c(1,2,3)))
Error: `rmse(c(1, 2), c(1, 2, 3))` did not throw the expected error.

We’d expect an error when given vectors of different lengths, but R “recycles” the shorter vector to match the longer one and only throws a warning.
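
You can see this “recycling” behavior directly:

c(1, 2, 3) - c(1, 2)
[1] 0 0 2
Warning message:
longer object length is not a multiple of shorter object length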

Making a Unit Test

We now have a group of expectations we would like to run for our function.

Let’s make our first “unit” test.

test_that("rmse works for various vectors", {
  expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
  expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
  expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))
})
Test passed 

And we passed!

  • First argument: a string naming the test
  • Second argument: a {code} block containing one or more expect_() calls

Making a Unit Test that Fails

Let’s go ahead and add our expectation that failed.

test_that("rmse works for various vectors", {
  expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
  expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
  expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))
  expect_error(rmse(c(1,2), c(1,2,3)))
})
-- Warning: rmse works for various vectors -------------------------------------
longer object length is not a multiple of shorter object length
Backtrace:
    x
 1. +-testthat::expect_error(rmse(c(1, 2), c(1, 2, 3)))
 2. | \-testthat:::quasi_capture(...) at testthat/R/expect-condition.R:126:5
 3. |   +-testthat (local) .capture(...) at testthat/R/quasi-label.R:54:3
 4. |   | \-base::withCallingHandlers(...) at testthat/R/deprec-condition.R:23:5
 5. |   \-rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo)) at testthat/R/quasi-label.R:54:3
 6. \-global rmse(c(1, 2), c(1, 2, 3)) at rlang/R/eval.R:96:3
 7.   \-base::mean((actual - predicted)^2)

-- Failure: rmse works for various vectors -------------------------------------
`rmse(c(1, 2), c(1, 2, 3))` did not throw an error.
Error:
! Test failed

Next, we will try to pass this test.

Fixing Our Function

Let’s fix our rmse() function to throw an error for mismatched vectors.

rmse <- function(actual, predicted) {
  if(length(actual) != length(predicted)) stop("Please pass vectors of the same length!")
  rmse <- sqrt(mean((actual - predicted) ^ 2))
  return(rmse)
}
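
Calling it with mismatched lengths now stops with our message:

rmse(c(1, 2), c(1, 2, 3))
Error in rmse(c(1, 2), c(1, 2, 3)) : Please pass vectors of the same length!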

Now we can rerun our test.

test_that("rmse works for various vectors", {
  expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
  expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
  expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))
  expect_error(rmse(c(1,2), c(1,2,3)))
})
Test passed 

🎉

testthat Overview

We have seen how to write individual unit tests using the testthat package.

  • Each unit test consists of one or more expect_() functions.

Now we will look at two ways to store and run all of our unit tests in…

  • a package
  • a project

We will look at a package first, as this is the most natural location for unit tests.

Unit Tests for a Package

Setting up Unit Testing for an R Package

We saw last week how you can use the tools from devtools and usethis to quickly create an R package.

For instance, you can create a package in the current folder using,

usethis::create_package(".")

Now, if we want to use testthat for our package, simply run

usethis::use_testthat(3) ## 3 sets the edition
✔ Adding 'testthat' to Suggests field in DESCRIPTION
✔ Adding '3' to Config/testthat/edition
✔ Creating 'tests/testthat/'
✔ Writing 'tests/testthat.R'
• Call `use_test()` to initialize a basic test file and open it for editing.

What use_testthat(3) Added

What files and folders were added to our package?

  • “tests/testthat/” folder
    • where our tests will live
  • “tests/testthat.R”
    • an auto-generated R script that runs all the tests we write (shown below)
  • “DESCRIPTION” file
    • adds testthat as a suggested package and sets the edition number to 3 (excerpt below)
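
For reference, here is roughly what the generated files contain (a sketch, with comments abridged; the package name will match your own):

tests/testthat.R
# This file is part of the standard setup for testthat.
# It is recommended that you do not modify it.
library(testthat)
library(prepUnitTests)

test_check("prepUnitTests")

DESCRIPTION (excerpt)
Suggests:
    testthat (>= 3.0.0)
Config/testthat/edition: 3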

Adding a Function to our package

Before running any tests, we need to add a function to our package.

Let’s add the rmse() function we wrote earlier to the “R/” folder:

R/rmse.R
rmse <- function(actual, predicted) {
  if(length(actual) != length(predicted)) stop("Please pass vectors of the same length!")
  rmse <- sqrt(mean((actual - predicted) ^ 2))
  return(rmse)
}

Now that we have a function, we can run the following in our R console:

usethis::use_test("rmse")
✔ Writing 'tests/testthat/test-rmse.R'
• Modify 'tests/testthat/test-rmse.R'

This creates a new test file, “test-rmse.R”, and opens it for us to modify.

Modifying our test file

The test file comes with a basic test for us to edit.

test-rmse.R
test_that("multiplication works", {
  expect_equal(2 * 2, 4)
})

We want to change

  • the test “name”
  • the expect_() calls

so we can test our function rmse().

Our Test File

Let’s reuse the test we wrote earlier.

test-rmse.R
test_that("rmse works for various vectors", {
  expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
  expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
  expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))
  expect_error(rmse(c(1,2), c(1,2,3)))
})

The only difference from earlier is that we have now saved our function in one file and our test in another, and both are part of an R package.

Running Tests in Package Development

You now have a few options to run your package tests from the terminal.

  1. devtools::check()
  2. devtools::test()
  3. testthat::test_dir("tests")

For writing a package, options 1 and 2 are most useful.

Let’s see what the output of each looks like.

devtools::check() Output

devtools::check()
══ Documenting ════════════════════════════════════════════════════════════════════════════
ℹ Updating prepUnitTests documentation
ℹ Loading prepUnitTests

══ Building ═══════════════════════════════════════════════════════════════════════════════
Setting env vars:
• CFLAGS    : -Wall -pedantic -fdiagnostics-color=always
• CXXFLAGS  : -Wall -pedantic -fdiagnostics-color=always
• CXX11FLAGS: -Wall -pedantic -fdiagnostics-color=always
• CXX14FLAGS: -Wall -pedantic -fdiagnostics-color=always
• CXX17FLAGS: -Wall -pedantic -fdiagnostics-color=always
• CXX20FLAGS: -Wall -pedantic -fdiagnostics-color=always
── R CMD build ────────────────────────────────────────────────────────────────────────────
✔  checking for file ‘/Users/matthewdehaven/Research/Courses/course-applied-economics-analysis-templates/prepUnitTests/DESCRIPTION’
─  preparing ‘prepUnitTests’:
✔  checking DESCRIPTION meta-information ...
─  checking for LF line-endings in source and make files and shell scripts
─  checking for empty or unneeded directories
   Removed empty directory ‘prepUnitTests/man’
─  building ‘prepUnitTests_0.0.0.9000.tar.gz’
   
══ Checking ═══════════════════════════════════════════════════════════════════════════════
Setting env vars:
• _R_CHECK_CRAN_INCOMING_REMOTE_               : FALSE
• _R_CHECK_CRAN_INCOMING_                      : FALSE
• _R_CHECK_FORCE_SUGGESTS_                     : FALSE
• _R_CHECK_PACKAGES_USED_IGNORE_UNUSED_IMPORTS_: FALSE
• NOT_CRAN                                     : true
── R CMD check ────────────────────────────────────────────────────────────────────────────
─  using log directory ‘/private/var/folders/wp/_szmdb513bxd6dqzmkgl5zrc0000gn/T/Rtmpc9QN1o/file2a624d0b73ee/prepUnitTests.Rcheck’
─  using R version 4.3.2 (2023-10-31)
─  using platform: aarch64-apple-darwin23.0.0 (64-bit)
─  R was compiled by
       Apple clang version 15.0.0 (clang-1500.0.40.1)
       GNU Fortran (Homebrew GCC 13.2.0) 13.2.0
─  running under: macOS Sonoma 14.2.1
─  using session charset: UTF-8
─  using options ‘--no-manual --as-cran’
✔  checking for file ‘prepUnitTests/DESCRIPTION’ ...
─  this is package ‘prepUnitTests’ version ‘0.0.0.9000’
─  package encoding: UTF-8
✔  checking package namespace information ...
✔  checking package dependencies (2s)
✔  checking if this is a source package ...
✔  checking if there is a namespace
✔  checking for executable files ...
✔  checking for hidden files and directories
✔  checking for portable file names
✔  checking for sufficient/correct file permissions ...
✔  checking serialization versions
✔  checking whether package ‘prepUnitTests’ can be installed (601ms)
✔  checking installed package size ...
✔  checking package directory ...
✔  checking for future file timestamps ...
✔  checking DESCRIPTION meta-information ...
✔  checking top-level files ...
✔  checking for left-over files
✔  checking index information
✔  checking package subdirectories ...
✔  checking R files for non-ASCII characters ...
✔  checking R files for syntax errors ...
✔  checking whether the package can be loaded ...
✔  checking whether the package can be loaded with stated dependencies ...
✔  checking whether the package can be unloaded cleanly ...
✔  checking whether the namespace can be loaded with stated dependencies ...
✔  checking whether the namespace can be unloaded cleanly ...
✔  checking loading without being on the library search path ...
✔  checking dependencies in R code ...
✔  checking S3 generic/method consistency ...
✔  checking replacement functions ...
✔  checking foreign function calls ...
✔  checking R code for possible problems (1.1s)
✔  checking for missing documentation entries ...
─  checking examples ... NONE
✔  checking for unstated dependencies in ‘tests’ ...
─  checking tests ...
✔  Running ‘testthat.R’ (351ms)
✔  checking for non-standard things in the check directory
✔  checking for detritus in the temp directory
   
   
── R CMD check results ────────────────────────────────────── prepUnitTests 0.0.0.9000 ────
Duration: 5.9s

0 errors ✔ | 0 warnings ✔ | 0 notes ✔

devtools::test() Output

devtools::test()
ℹ Testing prepUnitTests
✔ | F W  S  OK | Context
✔ |          4 | rmse                                                                                                     

══ Results ═════════════════════════════════════════════════════════════════════════
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 4 ]

This gives us nicely formatted output showing our successes.

devtools::check() will also include this summary if any of your tests fail.

Adding a Failing Test

Let’s add another test to our rmse() function.

test-rmse.r
test_that("rmse works for various vectors", {
  expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
  expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
  expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))
  expect_error(rmse(c(1,2), c(1,2,3)))
})

test_that("rmse works for a fitted model", {
  m <- lm(Petal.Length ~ Petal.Width, data = iris)
  fit <- fitted(m)
  act <- iris$Petal.Length

  x <- rmse(fit, act)

  expect_type(x, "character")
  expect_lt(x, 0.01) ## Expect "less than"
})

Errors on Our Tests

Now if we run our tests…

devtools::test()
ℹ Testing prepUnitTests
✔ | F W  S  OK | Context
✖ | 2        4 | rmse                                                                      
───────────────────────────────────────────────────────────────────────────────────────────
Failure (test-rmse.R:15:3): rmse works for a fitted model
`x` has type 'double', not 'character'.

Failure (test-rmse.R:16:3): rmse works for a fitted model
`x` is not strictly less than 0.01. Difference: 0.465
───────────────────────────────────────────────────────────────────────────────────────────

══ Results ════════════════════════════════════════════════════════════════════════════════
── Failed tests ───────────────────────────────────────────────────────────────────────────
Failure (test-rmse.R:15:3): rmse works for a fitted model
`x` has type 'double', not 'character'.

Failure (test-rmse.R:16:3): rmse works for a fitted model
`x` is not strictly less than 0.01. Difference: 0.465

[ FAIL 2 | WARN 0 | SKIP 0 | PASS 4 ]

Fixing our Test

Having gotten some failures, we’d either fix our test or our function. In this case the expectations themselves were wrong, so we fix the tests:

test-rmse.r
test_that("rmse works for various vectors", {
  expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
  expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
  expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))
  expect_error(rmse(c(1,2), c(1,2,3)))
})

test_that("rmse works for a fitted model", {
  m <- lm(Petal.Length ~ Petal.Width, data = iris)
  fit <- fitted(m)
  act <- iris$Petal.Length

  x <- rmse(fit, act)

  expect_type(x, "numeric")
  expect_lt(x, 0.5) ## Expect "less than"
})

Tests in a Package

Unit tests are well supported in R package development.

  • construct them using usethis::use_test("testname")
  • run them with devtools::check() or devtools::test()
  • writing useful tests ensures your functions behave as intended

Unit Tests in a Project

Setting up Unit Tests for a Project

Now imagine that we don’t want to make an R package, but instead have a project

  • i.e. just a folder (that we could store on GitHub)

We can still set up unit tests with the testthat package.

  • Add a folder called “tests” to the project.
  • Add your testing files in there (i.e. “test-rmse.r”)

Project Setup

We also need to have saved our rmse() function somewhere.

  • For now, let’s just put it in “code/rmse.r”
code/rmse.r
rmse <- function(actual, predicted) {
  if(length(actual) != length(predicted)) stop("Please pass vectors of the same length!")
  rmse <- sqrt(mean((actual - predicted) ^ 2))
  return(rmse)
}
tests/test-rmse.r
test_that("rmse works for various vectors", {
  expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
  expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
  expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))
  expect_error(rmse(c(1,2), c(1,2,3)))
})

test_that("rmse works for a fitted model", {
  m <- lm(Petal.Length ~ Petal.Width, data = iris)
  fit <- fitted(m)
  act <- iris$Petal.Length

  x <- rmse(fit, act)

  expect_type(x, "numeric")
  expect_lt(x, 0.5) ## Expect "less than"
})

Running our Tests

Now that there is no package structure, we have to run our tests with

  • testthat::test_dir("tests")

But if we did that right now, we would get…

testthat::test_dir("tests")
✔ | F W  S  OK | Context
✖ | 2        0 | rmse                                                               
────────────────────────────────────────────────────────────────────────────────────
Error (test-rmse.r:2:3): rmse works for various vectors
Error in `rmse(c(1, 2, 3), c(2, 2, 2))`: could not find function "rmse"
Backtrace:
    ▆
 1. └─testthat::expect_equal(rmse(c(1, 2, 3), c(2, 2, 2)), 0.816, tolerance = 0.01) at test-rmse.r:2:3
 2.   └─testthat::quasi_label(enquo(object), label, arg = "object") at testthat/R/expect-equality.R:62:3
 3.     └─rlang::eval_bare(expr, quo_get_env(quo)) at testthat/R/quasi-label.R:45:3

Error (test-rmse.r:13:3): rmse works for a fitted model
Error in `rmse(fit, act)`: could not find function "rmse"
────────────────────────────────────────────────────────────────────────────────────

══ Results ═════════════════════════════════════════════════════════════════════════
── Failed tests ────────────────────────────────────────────────────────────────────
Error (test-rmse.r:2:3): rmse works for various vectors
Error in `rmse(c(1, 2, 3), c(2, 2, 2))`: could not find function "rmse"
Backtrace:
    ▆
 1. └─testthat::expect_equal(rmse(c(1, 2, 3), c(2, 2, 2)), 0.816, tolerance = 0.01) at test-rmse.r:2:3
 2.   └─testthat::quasi_label(enquo(object), label, arg = "object") at testthat/R/expect-equality.R:62:3
 3.     └─rlang::eval_bare(expr, quo_get_env(quo)) at testthat/R/quasi-label.R:45:3

Error (test-rmse.r:13:3): rmse works for a fitted model
Error in `rmse(fit, act)`: could not find function "rmse"

[ FAIL 2 | WARN 0 | SKIP 0 | PASS 0 ]
Error: Test failures

Sourcing Our Function First

This is because we need to load our custom function first:

source("code/rmse.r")
testthat::test_dir("tests")
✔ | F W  S  OK | Context
✔ |          6 | rmse                                                               

══ Results ═════════════════════════════════════════════════════════════════════════
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 6 ]

And we pass all of our tests!

Sourcing Our Function in Our Tests

This is not recommended for packages, but in a project I would consider sourcing the rmse() function in our testing file.

tests/test-rmse.r
source("../code/rmse.r")

test_that("rmse works for various vectors", {
  expect_equal(rmse(c(1,2,3), c(2,2,2)), 0.816, tolerance = 0.01)
  expect_equal(rmse(c(0,0,0), c(10,10,10)), 10)
  expect_true(is.na(rmse(c(1,2,NA), c(1,2,3))))
  expect_error(rmse(c(1,2), c(1,2,3)))
})

...

Tests are run from inside the “tests” folder, so the relative path to the “code” folder needs the “..” to go up one level first.

Now Running Project Tests

Now that our unit test sources the rmse() function itself, we can simply run

testthat::test_dir("tests")
✔ | F W  S  OK | Context
✔ |          6 | rmse                                                               

══ Results ═════════════════════════════════════════════════════════════════════════
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 6 ]

I think this is a good setup for a project.

  • Each test file is self-sufficient
  • Lets you run tests from the terminal or anytime during the project
  • But sometimes your tests will rely on the rest of your code running first

Testing Your Analysis

I think a good general form for a “main.R” script is:

main.r
# Restore renv environment (should happen automatically)
renv::restore()

## Data Cleaning
source("code/clean-data.r")
source("code/transform-data.r")

## Analysis
source("code/run-regressions.r")

## Figures & Tables
source("code/figures/make-scatter-plot.r")
source("code/tables/make-summ-table.r")
source("code/tables/make-reg-table.r")

## Run Tests
testthat::test_dir("tests")

This way, every time you run “main.R” your tests will run at the end.

Data Science Project Tests

Here’s a list of things I would test in that project (a sketch of such a test file follows the list):

  • The raw data
    • dimensions
    • range of values (e.g. percent variables are between 0 and 100)
    • no missing values (or the expected number of missing values)
  • Same for “transformed” data (i.e. ready for analysis step)
  • The regression results
    • coefficients are what I expect
    • especially for any key result that I report in my paper
  • Figures and Tables
    • non-empty
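
Below is a hedged sketch of what one such test file could look like; the file path “data/clean-data.rds” and the column names are hypothetical stand-ins for your own pipeline outputs.

tests/test-clean-data.r
df <- readRDS("../data/clean-data.rds")  ## hypothetical output of the cleaning step

test_that("cleaned data looks as expected", {
  expect_equal(dim(df), c(1000, 12))               ## expected dimensions
  expect_true(all(df$pct >= 0 & df$pct <= 100))    ## percent variable stays in range
  expect_false(anyNA(df$id))                       ## no missing identifiers
})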

Writing Data Science Tests

Hopefully you already see how you could write a test using testthat for each of those items.

Then, as your project evolves, you’ll notice whenever something changes from what you expect.

There is one more package that is useful for this situation:

  • testdat, a package for data validation tests

testdat package

Why Another Package?

testthat has many basic expect_() functions that you can use to write any custom test you want.

testdat has written custom expect_() functions that make it easier to validate data.

  • Basically, think of it as expect_() functions written for data.frames

Install and load the package

renv::install("testdat")
library(testdat)

testdat example

Let’s make mtcars into a data.table.

mtdt <- data.table::data.table(mtcars, keep.rownames = TRUE)
mtdt
                     rn   mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
                 <char> <num> <num> <num> <num> <num> <num> <num> <num> <num> <num> <num>
 1:           Mazda RX4  21.0     6 160.0   110  3.90 2.620 16.46     0     1     4     4
 2:       Mazda RX4 Wag  21.0     6 160.0   110  3.90 2.875 17.02     0     1     4     4
 3:          Datsun 710  22.8     4 108.0    93  3.85 2.320 18.61     1     1     4     1
 4:      Hornet 4 Drive  21.4     6 258.0   110  3.08 3.215 19.44     1     0     3     1
 5:   Hornet Sportabout  18.7     8 360.0   175  3.15 3.440 17.02     0     0     3     2
 6:             Valiant  18.1     6 225.0   105  2.76 3.460 20.22     1     0     3     1
 7:          Duster 360  14.3     8 360.0   245  3.21 3.570 15.84     0     0     3     4
 8:           Merc 240D  24.4     4 146.7    62  3.69 3.190 20.00     1     0     4     2
 9:            Merc 230  22.8     4 140.8    95  3.92 3.150 22.90     1     0     4     2
10:            Merc 280  19.2     6 167.6   123  3.92 3.440 18.30     1     0     4     4
11:           Merc 280C  17.8     6 167.6   123  3.92 3.440 18.90     1     0     4     4
12:          Merc 450SE  16.4     8 275.8   180  3.07 4.070 17.40     0     0     3     3
13:          Merc 450SL  17.3     8 275.8   180  3.07 3.730 17.60     0     0     3     3
14:         Merc 450SLC  15.2     8 275.8   180  3.07 3.780 18.00     0     0     3     3
15:  Cadillac Fleetwood  10.4     8 472.0   205  2.93 5.250 17.98     0     0     3     4
16: Lincoln Continental  10.4     8 460.0   215  3.00 5.424 17.82     0     0     3     4
17:   Chrysler Imperial  14.7     8 440.0   230  3.23 5.345 17.42     0     0     3     4
18:            Fiat 128  32.4     4  78.7    66  4.08 2.200 19.47     1     1     4     1
19:         Honda Civic  30.4     4  75.7    52  4.93 1.615 18.52     1     1     4     2
20:      Toyota Corolla  33.9     4  71.1    65  4.22 1.835 19.90     1     1     4     1
21:       Toyota Corona  21.5     4 120.1    97  3.70 2.465 20.01     1     0     3     1
22:    Dodge Challenger  15.5     8 318.0   150  2.76 3.520 16.87     0     0     3     2
23:         AMC Javelin  15.2     8 304.0   150  3.15 3.435 17.30     0     0     3     2
24:          Camaro Z28  13.3     8 350.0   245  3.73 3.840 15.41     0     0     3     4
25:    Pontiac Firebird  19.2     8 400.0   175  3.08 3.845 17.05     0     0     3     2
26:           Fiat X1-9  27.3     4  79.0    66  4.08 1.935 18.90     1     1     4     1
27:       Porsche 914-2  26.0     4 120.3    91  4.43 2.140 16.70     0     1     5     2
28:        Lotus Europa  30.4     4  95.1   113  3.77 1.513 16.90     1     1     5     2
29:      Ford Pantera L  15.8     8 351.0   264  4.22 3.170 14.50     0     1     5     4
30:        Ferrari Dino  19.7     6 145.0   175  3.62 2.770 15.50     0     1     5     6
31:       Maserati Bora  15.0     8 301.0   335  3.54 3.570 14.60     0     1     5     8
32:          Volvo 142E  21.4     4 121.0   109  4.11 2.780 18.60     1     1     4     2
                     rn   mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
The row names (“rn”) should all be unique:

expect_unique(data = mtdt, rn)

testdat failure example

If instead we check whether the “cyl” column has unique values…

expect_unique(data = mtdt, cyl)
Error: `mtdt` has 32 duplicate records on variable `cyl`.
Filter: None

testdat Test Values

We can expect certain values using…

expect_values(data = mtdt, cyl, c(4, 6, 8))
expect_values(data = mtdt, cyl, c(4, 6, 8, 100))
expect_values(data = mtdt, cyl, c(4, 6))
Error: `mtdt` has 14 records failing value check on variable `cyl`.
Variable set: `cyl`
Filter: None
Arguments: `<dbl: 4, 6>, miss = <chr: NA, "">`

You can also test character columns for specific values, e.g. c("A", "B", "C").
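
For instance, with a small hypothetical character column:

df <- data.frame(grade = c("A", "B", "B", "A"))
expect_values(data = df, grade, c("A", "B", "C"))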

testdat Test Ranges of Values

And we can expect a range of values instead of specific ones…

expect_range(data = mtdt, mpg, 10, 50)
expect_range(data = mtdt, mpg, 10, 30)
Error: `mtdt` has 4 records failing range check on variable `mpg`.
Variable set: `mpg`
Filter: None
Arguments: `min = 10, max = 30`

testdat Test a Condition

And we can test an if-then condition…

expect_cond(data = mtdt, cyl == 6, mpg < 20)
Error: `mtdt` failed consistency check. 3 cases have `cyl == 6` but not `mpg < 20`.
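
For comparison, a condition that does hold passes silently: every eight-cylinder car in mtcars gets under 20 mpg.

expect_cond(data = mtdt, cyl == 8, mpg < 20)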

testdat Overview

Many useful expect_ functions for working with data.frames.

Very useful in an economics research project setting.

Not as useful for testing a package (unless it’s a data package).

Summary

Lecture Summary

  • What are Unit Tests
  • Unit Tests in R
    • testthat
    • testdat
  • Unit Testing for Packages
  • Unit Testing for Projects

Coding Example

  • Using unit tests in a project