This PR attempts to lightly adapt the Lua 5.4.7 test suite to Luerl. I have gone through most (if not all) of the tests, commented out the ones that are not functional, added a print() noting that each is non-functional, and added a TODO in the comments where applicable. As we've discussed on Discord, many of the failing tests may not make sense to fix in Luerl, but I do think I've identified a few genuine compatibility bugs here and there.
I don't expect this PR to get merged as-is, but I think it's a starting point for building something Luerl-specific. Maybe we could discuss an appropriate structure (eunit-based, perhaps?) for Luerl-specific tests, based on the Lua test suite, that could run as part of the GitHub action or separately. That's my thinking for the ultimate direction of this PR (or a new PR after closing this one).
Here's how I've been approaching each file:
D = code:priv_dir(luerl) ++ "/lua-5.4.7-tests/".
F = fun(File) ->
        case luerl:dofile(D ++ File, luerl:init()) of
            {ok, _, _} -> ok;
            {lua_error, E, _St} -> E
        end
    end.
Which gives an output like this:
F("math.lua").
testing numbers and math lib
NOT integer testing overflow properties
NOT testing NaN
testing floating point precision limit
64-bit integers, 53-bit (mantissa) floats
testing types of integers and floats
testing basic float notation
string string to number coercion
testing minus zero as table key
testing modf
NOT testing modf with NaN (division by zero)
testing the size of positive and negative math.huge
testing integer arithmetic for max/min integers
...and so on
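The one-file helper above generalizes to a quick sweep over the whole suite. A sketch, assuming the same directory layout and the same luerl:dofile/2 return shapes as above (the variable names and the filtering at the end are just illustrative, not part of this PR):

```erlang
%% Run every .lua file in the suite directory and collect
%% {File, ok | LuaError} pairs, then print only the failures.
D = code:priv_dir(luerl) ++ "/lua-5.4.7-tests/",
Files = filelib:wildcard("*.lua", D),
Results = [begin
               R = case luerl:dofile(D ++ File, luerl:init()) of
                       {ok, _, _} -> ok;
                       {lua_error, E, _St} -> E
                   end,
               {File, R}
           end || File <- Files],
[io:format("~s: ~p~n", [F, E]) || {F, E} <- Results, E =/= ok].
```

Note that each file gets a fresh luerl:init() state, so a hang or crash in one file can't corrupt the next (though a shell hang still blocks the whole sweep, which is why the hanging tests had to be commented out first).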
Where possible, I've tried to describe each test in its print() where no existing comment did so; some of these descriptions may not be accurate. I've also annotated some of the non-functioning tests with a TODO comment, which you can grep for:
$ grep -B1 "TODO" *.lua
# ...
math.lua-print('NOT testing negative exponents')
math.lua:-- TODO: {badarith,'^',[0,-3]}
--
math.lua-print("NOT testing precision of module for large numbers")
math.lua:--TODO: shell hang
--
math.lua-print("NOT testing return of randomseed()")
math.lua:--TODO: {badarg,randomseed,[]}
--
math.lua-print("NOT testing random for floats")
math.lua:--TODO: relies on goto behavior
# ...
Broadly summarizing the categories of non-functional tests:
Unimplemented features. This includes goto, coroutine, etc. Some of the tests also (as you might expect) depend on functionality that hasn't been implemented in debug or io, for instance. I don't see these as particularly urgent.
Shell hangs, evidenced by the function not returning and my CPU/RAM going to 100% utilization. I am not sure how many of these are unbounded vs very heavy resource consumption. Some of the tests in heavy.lua, for instance, hung up the shell for a bit, but were OOM-killed on a machine with 32GB RAM. Other tests unexpectedly hung the shell for reasons I couldn't understand. I've tried to note these in TODOs where I remembered to do so :)
Assertion failures due to mismatched error messages between Lua and Luerl. The Lua test suite does a lot of string matching on error messages, and Luerl's messages often differ. There are a lot of these, and personally I would not prioritize them.
Assertion failures due to implementation differences or gaps. These seem like the important ones to consider changing to match PUC Lua if possible.
goto and coroutines have not been implemented, as I have mentioned in a few places. I don't see these as interesting 😉
The debug module is very dependent on the implementation of the VM, so many of these cannot be done.
The shell hanging is interesting and I will check it.
I will try to understand the assertion failures and what they mean.
What would be the best way to help with the assertion failures? Would it be useful to have separate issues created with more detailed side-by-side comparisons of Luerl and Lua 5.4 behavior? Or perhaps rewriting the failing assertions in a more Erlang-y way with eunit? Aside from coroutines, goto, etc., do you have particular areas of compatibility differences that you are more or less interested in?
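For the eunit direction, one possible shape (purely a sketch; the module name and structure here are made up, not settled API) would be a test generator that runs each adapted file through Luerl and turns a lua_error into a per-file failure:

```erlang
%% Hypothetical eunit wrapper around the adapted test files.
%% The module and its layout are a sketch, not part of Luerl today.
-module(luerl_suite_tests).
-include_lib("eunit/include/eunit.hrl").

suite_test_() ->
    Dir = code:priv_dir(luerl) ++ "/lua-5.4.7-tests/",
    %% One named eunit test per file; a lua_error (or a non-ok
    %% return) fails only that file, and eunit reports it by name.
    [{File,
      fun() ->
              ?assertMatch({ok, _, _},
                           luerl:dofile(Dir ++ File, luerl:init()))
      end}
     || File <- filelib:wildcard("*.lua", Dir)].
```

One nice side effect of this shape is that eunit's per-test timeout (via {timeout, Secs, Fun}) could bound the shell-hang cases instead of freezing the whole run.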