
Measurement Method Is Not Accurate #9

@Alois-xx

Description


I was wondering why Revenj is so much better in your tests, especially since I know Jil is near the optimum of what can be done, based on my measurements here:

D:\Source\git\SerializerTests\bin\Release\net472>SerializerTests.exe -test combined -N 3000000 -serializer Jil,Revenj
Serializer  Objects    Serialize (s)  Deserialize (s)  Size (bytes)  FileVersion  Framework                  ProjectHome                            DataFormat  FormatDetails  Supports Versioning
Revenj      3000000    2.8786         4.409            111777835     1.5.1.0      .NET Framework 4.7.3416.0  https://github.com/ngs-doo/revenj      Text        Json           Yes
Jil         3000000    0.6261         1.325            111777803     2.17.0.0     .NET Framework 4.7.3416.0  https://github.com/kevin-montrose/Jil  Text        Json           Yes

According to my tests, Jil is at least four times faster. I then took a look at your test suite and found that you use factory delegates which, at first glance, appear to point to the same factory:

Func<int, SmallObjects.Message>  factory1 = i => Models.Small.Message.Factory<SmallObjects.Message>(i);   // used for Revenj
Func<int, Models.Small.Message>  factory1 = i => Models.Small.Message.Factory<Models.Small.Message>(i);   // used for Jil and the others
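The two delegates are easy to misread as equivalent, but the generic argument differs, so each path allocates a completely different CLR type. A self-contained sketch of the problem, using stand-in types (MessageA and MessageB are placeholders for SmallObjects.Message and Models.Small.Message, not the real classes):

```csharp
using System;

// Stand-ins for SmallObjects.Message and Models.Small.Message (placeholders).
class MessageA { public string message; public int version; }
class MessageB { public string message; public int version; }

static class FactoryCheck
{
    // Same shape as the Factory<T> in the test suite: the generic type
    // argument alone decides which type gets allocated and measured.
    public static T Factory<T>(int i) where T : new()
    {
        dynamic instance = new T();
        instance.message = "some message " + i;
        instance.version = i;
        return instance;
    }

    static void Main()
    {
        object a = Factory<MessageA>(0);
        object b = Factory<MessageB>(0);
        // Different generic arguments => different types under test.
        Console.WriteLine(a.GetType() == b.GetType());   // prints "False"
    }
}
```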

This factory uses dynamic for some reason, which looks strange, but ok:

		public static T Factory<T>(int i) where T : new()
		{
			dynamic instance = new T();
			instance.message = "some message " + i;
			instance.version = i;
			return instance;
		}
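As an aside, the dynamic binding could be avoided entirely with an interface constraint, which keeps DLR call-site overhead out of the object-construction path. A minimal sketch of that alternative; the IMessage interface and the Message stand-in class are my assumptions and do not exist in the test suite:

```csharp
using System;

// Assumption: a hypothetical interface covering the two assigned members.
interface IMessage
{
    string message { get; set; }
    int version { get; set; }
}

// Stand-in implementation for the sketch.
class Message : IMessage
{
    public string message { get; set; }
    public int version { get; set; }
}

static class Factories
{
    // Same behavior as the Factory<T> above, but statically bound:
    // no DLR call sites are created for the property assignments.
    public static T Factory<T>(int i) where T : IMessage, new()
    {
        var instance = new T();
        instance.message = "some message " + i;
        instance.version = i;
        return instance;
    }

    static void Main()
    {
        var m = Factory<Message>(42);
        Console.WriteLine(m.message);   // prints "some message 42"
    }
}
```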

After pulling out the heavy stuff like Intel's VTune, I checked whether your micro-benchmark shows differences in cache-level behavior or other exotic effects. It turns out it is much simpler: you are creating entirely different objects, which allocate different amounts of data. Models.Small.Message is the type containing the factory, and it is used for Jil and the others, while SmallObjects.Message is a different type defined in Revenj.Serialization.dll, which contains generated code.
That is cheating a little, because pregenerated code will win every startup measurement, which is not the fairest comparison. But anyway, you are faster; that is ok.

Now I took the liberty of adding Revenj 1.5 from NuGet to your test and benchmarked your serializer with the same data object. For serialization, Jil is indeed nearly two times faster than Revenj:

D:\>D:\Source\git\json-benchmark\Benchmark\bin\Release\JsonBenchmark.exe RevenjJsonMinimal Small Serialization 5000000
duration = 6660
size = 257777780
invalid deserialization = 0

D:\>D:\Source\git\json-benchmark\Benchmark\bin\Release\JsonBenchmark.exe Jil Small Serialization 5000000
duration = 3621
size = 257777780
invalid deserialization = 0

If I include both Serialize and Deserialize, Revenj is over two times slower when the same data object is used, rather than pregenerated code together with a serializer that has no NuGet package:

D:\>D:\Source\git\json-benchmark\Benchmark\bin\Release\JsonBenchmark.exe Jil Small Both 5000000
duration = 6914
size = 257777780
invalid deserialization = 0

D:\>D:\Source\git\json-benchmark\Benchmark\bin\Release\JsonBenchmark.exe RevenjJsonMinimal Small Both 5000000
duration = 16297
size = 257777780
invalid deserialization = 0

I can fully support this claim from your main page https://github.com/ngs-doo/json-benchmark:

  • Almost everyone claims to be THE FASTEST

This also includes you.

Please update your test suite with a fair comparison of the different serializers that leads to reproducible results. By the way, Utf8Json is even faster, also under your test suite.
