33 changes: 33 additions & 0 deletions build/tested.lua
@@ -56,6 +56,22 @@ function tested.only(name, fn_or_options, fn)
table.insert(tested.tests, { name = name, fn = func, options = options, kind = "only" })
end

function tested.before(fn)
tested.before_fn = fn
end

function tested.after(fn)
tested.after_fn = fn
end

function tested.before_each(fn)
tested.before_each_fn = fn
end

function tested.after_each(fn)
tested.after_each_fn = fn
end

function tested.assert(assertion)
local errors = {}
if assertion.expected == nil then table.insert(errors, "'expected'") end
@@ -218,6 +234,10 @@ function tested:run(filename, options)
total_time = 0,
}

if tested.before_fn then
tested.before_fn()
end

for i, test in ipairs(self.tests) do

test_results.tests[i] = { assertion_results = {}, name = test.name }
@@ -229,6 +249,10 @@
test_results.tests[i].time = 0

else
if tested.before_each_fn then
tested.before_each_fn()
end

local assert_failed_count = 0
local total_assertions = 0

@@ -269,6 +293,10 @@


adjust_for_expected(test.options.expected, test_results.tests[i])

if tested.after_each_fn then
tested.after_each_fn()
end
end


@@ -277,6 +305,11 @@
if test_results.counts.failed == 0 and test_results.counts.invalid == 0 then
test_results.fully_tested = true
end

if tested.after_fn then
tested.after_fn()
end

return test_results
end

10 changes: 10 additions & 0 deletions build/tested/types.lua
@@ -137,6 +137,16 @@ local types = {}
















10 changes: 9 additions & 1 deletion docs/api-reference.md
@@ -13,14 +13,22 @@
- ex: `tested.test("luajit only", {run_when=type(jit) == 'table'}, function())` - will only run when executing via LuaJIT

## Asserts
All the asserts in `tested` take in a table with a couple of values that should hopefully make debugging your unit tests. The `given` and `should` are [optional] text representations of what your unit test are doing. It can be useful to have text representations so you're not having to rely on the values alone. It's also nice if you're passing in a bunch of test files and use the filename in `given`, so that it appears in the output if something goes wrong.
All the asserts in `tested` take in a table with a couple of values that should hopefully make debugging your unit tests easier. The `given` and `should` are _optional_ text representations of what your unit test is doing. It can be useful to have these text representations so you're not relying on the values alone. It's also nice, if you're passing in a bunch of test files, to use the filename in `given` so that it appears in the output if something goes wrong.

- `tested.assert({given?: string, should?: string, expected, actual})`
- `tested.assert_truthy({given?: string, should?: string, actual})`
- `tested.assert_falsy({given?: string, should?: string, actual})`
- `tested.assert_throws_exception({given?: string, should?: string, expected?: any, actual: function()})`
- `expected` is also optional here, but if passed in, `tested` will check if it matches the error that comes back from the function. If `expected` is a `string`, it should match the exact string that is thrown by your call to `error`.
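
As a quick illustrative sketch (the test name, filename, and values below are made up, not from the library's own examples), the optional `given`/`should` text is carried through to the output alongside the values:

```lua
tested.test("api reference sketch", function()
  local fields = {"a", "b", "c"}  -- hypothetical value under test

  tested.assert({
    given = "tests/sample_test.lua",  -- e.g. the filename, as suggested above
    should = "have three fields",
    expected = 3,
    actual = #fields,
  })

  tested.assert_truthy({given = "a match for 'he' in 'hello'", actual = string.find("hello", "he")})
end)
```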

## Test Lifecycle
All the lifecycle methods take in a function that will be executed at the corresponding time. For any skipped test, `before_each` and `after_each` will not run.

- `tested.before(fn: function())` - executes before any test in a file runs
- `tested.after(fn: function())` - executes after all the tests in a file have run
- `tested.before_each(fn: function())` - executes before each test
- `tested.after_each(fn: function())` - executes after each test
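
A minimal sketch of registering the hooks (the `db` handle is purely hypothetical):

```lua
local db  -- hypothetical connection handle, for illustration only

tested.before(function() db = {queries = 0} end)    -- runs once, before any test in the file
tested.after(function() db = nil end)               -- runs once, after all tests in the file
tested.before_each(function() db.queries = 0 end)   -- resets per-test state
tested.after_each(function() db.queries = nil end)  -- per-test cleanup
```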

## How `tested` works (high level)
1. Recursively searches through the `tests` folder (from where it's called) or the folders specified [on the commandline](./cli.md#tested-base-command) for files with the suffix `_test.lua` (or `_test.tl`) and makes a list of them
2. Before running a test file, it notes which packages have been loaded.
2 changes: 1 addition & 1 deletion docs/roadmap.md
@@ -9,7 +9,7 @@ Things that I am one day planning to add (in no particular order):
- [x] `run_when` for conditional running tests
- [ ] `retries` and (maybe) `retry_timeout` for automatically retrying failing tests
- [ ] tags for filtering
- [ ] Lifecycle management (`before`, `after`, `before_each`, `after_each`)
- [x] Lifecycle management (`before`, `after`, `before_each`, `after_each`)
- [ ] Table driven assertion (no more for loops around asserts!)
- [ ] Stubbing
- [ ] Mocking
161 changes: 97 additions & 64 deletions docs/unit-testing.md
@@ -1,9 +1,9 @@
# Unit Testing
`tested`, as a framework, tries to let you _just write tests_. If you want multiple asserts in one test, go for it. Dynamically generate tests? No problem! `tested` aims to be flexible enough to work with a wide variety of testing scenarios and philosophies.

## Testing tables
## Tests

`tested.assert` will also deep compare tables, and will generate a little summary of the differences as well as print out the expected and actual table.
Below is an example of a basic test comparing two tables. `tested.assert` will deep compare the tables and generate a little summary of the differences, as well as print out the expected and actual tables.

=== "Test"

@@ -15,15 +15,15 @@
scores = {10, 20, 30},
config = { debug = true, port = 8080, crazy_table = {"hello", "world"} }
}

local t2 = {
name = 'Bob',
age = 30,
scores = {10, 25, 30},
config = { debug = false, port = 8080 },
email = 'bob@example.com'
}

tested.assert({
given = "a basic table",
should = "not be the same as the other table",
@@ -55,7 +55,7 @@
name = "Bob",
scores = { 10, 25, 30 }
}

Expected:
{
age = 30,
@@ -90,63 +90,8 @@ tested.test("tables with self-cycles, but the same structure should be equal", f
end)
```

## Truthy/Falsy tests

Sometimes in Lua you want to check if _anything_ returned (like a `string.match` or that a value exists in a table), we've added in an `assert_truthy` and `assert_falsy` to help out in those cases.

We would recommend if you're looking for explicitly looking for `true` or `false`, maybe stick with the regular `assert` so your tests are more semantically correct, but if checking "exists" and "not exists", `assert_truthy` and `assert_falsy` are good candidates.

```lua
tested.test("truthy", function()
tested.assert_truthy({given="empty string", actual=""})
tested.assert_truthy({given="a number", actual=0})
tested.assert_truthy({given="a function", actual=function() end})
tested.assert_truthy({given="a table", actual={}})
tested.assert_truthy({given="an unpack", actual=table.unpack({"a", "b"})})
tested.assert_truthy({given="true boolean", actual=true})
tested.assert_truthy({given="not false", actual=not false})
tested.assert_truthy({given="not nil", actual=not nil})
tested.assert_truthy({given="string.find he in hello", actual=string.find("hello", "he")})
end)

tested.test("falsy", function()
local b
tested.assert_falsy({given="nil", actual=nil})
tested.assert_falsy({given="false", actual=false})
tested.assert_falsy({given="unset variable", actual=b})
end)
```

## Testing exceptions
When writing assertions that check that an exception has been thrown, the `actual` should be a function taking no arguments, that when run raises an exception. `tested` also has the ability to capture an error (using `pcall` under the hood) and check if that returns as expected as well.

```lua
-- simple check that exception will be raised
tested.test("assert_throws_exception handles exception in assert", function()
tested.assert_throws_exception({
given = "an explicit error",
actual = function() error("gets raised, but handled!") end
})
end)

-- check that a specific exception was thrown
tested.test("example with exceptions and error checking", function()

-- will throw the specific exception in "expected" below
local function_that_throws = function()
local options = {loadFromString=true, headers=false, fieldsToKeep={1, 2}}
ftcsv.parse("apple>banana>carrot\ndiamond>emerald>pearl", ">", options)
end
tested.assert_throws_exception({
given="no headers and no renaming takes place",
expected="ftcsv: fieldsToKeep only works with header-less files when using the 'rename' functionality",
actual=function_that_throws
})
end)
```


## Skipping & Only tests
### Skipping & Only tests

For quick debugging purposes, there are `tested.skip` and `tested.only`. These allow you to quickly isolate testing when running selective tests in a particular file. For things that are going to be broken longer term, you should set the `expected` option.

@@ -179,7 +124,9 @@

Both of these work on a _per-test file_ basis, so it may also be useful to pass the specific test file that you are working with to `tested`: `tested ./tests/file_with_only_test.lua`
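
As a rough sketch (assuming `tested.skip` takes the same arguments as `tested.test` and `tested.only`), isolating one test while parking another might look like this:

```lua
-- only this test in the file will run
tested.only("the test I am currently debugging", function()
  tested.assert({given = "a sum", should = "be 4", expected = 4, actual = 2 + 2})
end)

-- parked for now; will be reported as skipped
tested.skip("broken until the new parser lands", function()
  tested.assert({given = "a sum", should = "be 5", expected = 5, actual = 2 + 3})
end)
```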

## Options


## Test Options

### Conditional Skipping
If you want to _conditionally_ skip tests based on something that can be determined at runtime (LuaJIT, operating system, dependency present or not), there is the `run_when` option.
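
For example, as noted in the API reference, checking for LuaJIT at runtime can gate a test (a minimal sketch):

```lua
tested.test("luajit only", {run_when = type(jit) == 'table'}, function()
  -- only runs when executing via LuaJIT, where the global `jit` table exists
  tested.assert_truthy({given = "the jit table", actual = jit})
end)
```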
@@ -222,10 +169,96 @@



## Invalid tests
If a test file has a test that throws an unhandled exception, `tested` finds a test without any asserts, or a test with `expected` set returns without that result, they are considered "invalid", and will display as such in the results and will be listed in the summary as "invalid".
## Assertions

### Truthy/Falsy tests

Sometimes in Lua you want to check if _anything_ returned (like a `string.match` or that a value exists in a table), so we've added `assert_truthy` and `assert_falsy` to help out in those cases.

We would recommend that if you're explicitly looking for `true` or `false`, you stick with the regular `assert` so your tests are more semantically correct; but if you're checking "exists" and "not exists", `assert_truthy` and `assert_falsy` are good candidates.

```lua
tested.test("truthy", function()
tested.assert_truthy({given="empty string", actual=""})
tested.assert_truthy({given="a number", actual=0})
tested.assert_truthy({given="a function", actual=function() end})
tested.assert_truthy({given="a table", actual={}})
tested.assert_truthy({given="an unpack", actual=table.unpack({"a", "b"})})
tested.assert_truthy({given="true boolean", actual=true})
tested.assert_truthy({given="not false", actual=not false})
tested.assert_truthy({given="not nil", actual=not nil})
tested.assert_truthy({given="string.find he in hello", actual=string.find("hello", "he")})
end)

tested.test("falsy", function()
local b
tested.assert_falsy({given="nil", actual=nil})
tested.assert_falsy({given="false", actual=false})
tested.assert_falsy({given="unset variable", actual=b})
end)
```

### Testing exceptions
When writing assertions that check that an exception has been thrown, the `actual` should be a function taking no arguments that, when run, raises an exception. `tested` also has the ability to capture the error (using `pcall` under the hood) and check that it matches `expected`.

```lua
-- simple check that exception will be raised
tested.test("assert_throws_exception handles exception in assert", function()
tested.assert_throws_exception({
given = "an explicit error",
actual = function() error("gets raised, but handled!") end
})
end)

-- check that a specific exception was thrown
tested.test("example with exceptions and error checking", function()

-- will throw the specific exception in "expected" below
local function_that_throws = function()
local options = {loadFromString=true, headers=false, fieldsToKeep={1, 2}}
ftcsv.parse("apple>banana>carrot\ndiamond>emerald>pearl", ">", options)
end
tested.assert_throws_exception({
given="no headers and no renaming takes place",
expected="ftcsv: fieldsToKeep only works with header-less files when using the 'rename' functionality",
actual=function_that_throws
})
end)
```

## Test Lifecycle
`tested` has support for a couple of test lifecycle methods. They allow you to register a function to run `before` any tests within the file have run, `after` all tests have run, `before_each` test, and `after_each` test. If a test is skipped for any reason (`tested.skip`, `run_when` is `false`, filtering, etc.), the `before_each` and `after_each` will **not** be run. Test lifecycle hooks can be useful if you want to set up or tear down connections/services/configs, create or clean up temporary files, or even one day set up stubs and mocks!

Here's a simple example of what can be done:

```lua
local counts = { before = 0, after = 0, before_each = 0, after_each = 0 }

tested.before(function() counts.before = counts.before + 1 end)
tested.after(function() counts.after = counts.after + 1 end)
tested.before_each(function() counts.before_each = counts.before_each + 1 end)
tested.after_each(function() counts.after_each = counts.after_each + 1 end)

tested.test("before runs once before first test", function()
tested.assert({ given = "before count", should = "be 1", expected = 1, actual = counts.before })
tested.assert({ given = "after count", should = "be 0", expected = 0, actual = counts.after })
tested.assert({ given = "before_each count", should = "be 1", expected = 1, actual = counts.before_each })
tested.assert({ given = "after_each count", should = "be 0", expected = 0, actual = counts.after_each })
end)

tested.test("after_each runs after first test, before_each runs again", function()
tested.assert({ given = "before count", should = "still be 1", expected = 1, actual = counts.before })
tested.assert({ given = "after count", should = "still be 0", expected = 0, actual = counts.after })
tested.assert({ given = "before_each count", should = "be 2", expected = 2, actual = counts.before_each })
tested.assert({ given = "after_each count", should = "be 1", expected = 1, actual = counts.after_each })
end)

-- before_each and after_each will not run on skipped tests!
tested.test("this test is skipped", { run_when = false }, function() end)
```

## Invalid tests
A test is considered "invalid" if it throws an unhandled exception, if `tested` finds it has no asserts, or if it has `expected` set and returns without that result. Invalid tests will display as such in the results and will be listed in the summary as "invalid".

<code class="highlight md-code__content md-typeset overflow-auto">
<pre>
33 changes: 33 additions & 0 deletions src/tested.tl
@@ -56,6 +56,22 @@ function tested.only(name: string, fn_or_options: function() | types.TestedOptio
table.insert(tested.tests, {name=name, fn=func, options=options, kind="only"})
end

function tested.before(fn: function())
tested.before_fn = fn
end

function tested.after(fn: function())
tested.after_fn = fn
end

function tested.before_each(fn: function())
tested.before_each_fn = fn
end

function tested.after_each(fn: function())
tested.after_each_fn = fn
end

function tested.assert<T>(assertion: types.Assertion<T>): boolean, string
local errors = {}
if assertion.expected == nil then table.insert(errors, "'expected'") end
@@ -218,6 +234,10 @@ function tested:run(filename: string, options: types.TestRunnerOptions): types.T
total_time = 0
}

if tested.before_fn then
tested.before_fn()
end

for i, test in ipairs(self.tests) do

test_results.tests[i] = {assertion_results = {}, name = test.name}
@@ -229,6 +249,10 @@ function tested:run(filename: string, options: types.TestRunnerOptions): types.T
test_results.tests[i].time = 0

else
if tested.before_each_fn then
tested.before_each_fn()
end

local assert_failed_count = 0
local total_assertions = 0

@@ -269,6 +293,10 @@ function tested:run(filename: string, options: types.TestRunnerOptions): types.T

-- only adjust for tests that are run. Otherwise the skips when filtering or tested.skip will trigger
adjust_for_expected(test.options.expected, test_results.tests[i])

if tested.after_each_fn then
tested.after_each_fn()
end
end

-- always add up at the end!
@@ -277,6 +305,11 @@ function tested:run(filename: string, options: types.TestRunnerOptions): types.T
if test_results.counts.failed == 0 and test_results.counts.invalid == 0 then
test_results.fully_tested = true
end

if tested.after_fn then
tested.after_fn()
end

return test_results
end
