
The API side of this library is pretty good. I've generally avoided it due to the compile time: I prefer rapidjson for its better performance and compile time when I want a C++ API, or cJSON when a C API is fine and I really just want an extremely low compile time. (Lately I've been playing with a live-coding C++ context for a game engine as a side project; I use cJSON there, and rapidjson on the main project.) cJSON's tradeoff is that it sticks with a single global allocator that you can only customize in one location, and it also seems to parse slightly slower.

cJSON's C API and rapidjson's somewhat less ergonomic API have felt fine because I usually make read / write reflective and don't write that much manual JSON code.

Specifically for the case where compile time doesn't matter and I need expressive JSON object mutation in C++ (or some tradeoff thereof), this library seems good.

Another big feature (which I feel the library could be louder about -- it really is a big one!) is that you can also generate binary data in a bunch of formats (BSON, CBOR, MessagePack and UBJSON) that have a node structure similar to JSON's, from the same in-memory instances of this library's data type. That's something I've wanted for various reasons: smaller files, and potentially better network send times for asynchronous sends and downloads (for real-time multiplayer you still basically want an encoding that doesn't embed field-name strings). I do think I may end up doing it one layer above, at the engine level, and just have a backend other than cJSON etc. too, though...



While I agree with you on compile times getting slower with nlohmann/json, I think its performance is fairly adequate. I've been using it on the ESP32 in single-core mode and it's still fast enough for everything. Unless you parse tens of millions of JSON objects per second, you won't really notice any difference between the various JSON libraries.

Ironically, cJSON is _worse_ in my use case because, as you wrote, it doesn't support allocators. nlohmann/json fully supports C++ allocators, so it's trivial to allocate on SPIRAM instead of the ESP32's limited DRAM. Support for allocators is why I often pick C++ for projects with heterogeneous kinds of memory.

Also, nlohmann/json supports strongly typed serialization and deserialization, which in my experience vastly reduces the number of errors people make when reading/writing JSON. I found it simpler to rewrite some critical code that parsed very complex data using cJSON to nlohmann/json than to fix up all the tiny errors the previous developer made while reading "raw" JSON: parsing numbers as enums, strings as other kinds of enums, and so on.


The tradeoff would be different in an embedded context, for sure. I don't know that cJSON being worse due to not supporting allocators is "ironic"; it's just the tradeoff there. Compile time really matters for my usage (changing gameplay code and seeing results immediately), and nlohmann/json was increasing my compile times by nontrivial amounts (a 50-100% increase).

Re: strongly typed -- agreed. That's basically what I do with cJSON using a static reflection system in C++ (it recursively reads / writes structs and supports customization points), so it's kind of like using cJSON to give yourself a thin typed C++ layer. Agreed that a typed approach is more sensible than writing raw read / write code yourself (although that does make sense in some scenarios).


For what it's worth, cJSON does support a (global) allocator override, using cJSON_InitHooks().

Here's what I use on ESP32 to push allocation to SPIRAM: https://gist.github.com/cibomahto/a29b6662847e13c61b47a194fa...


Yes, but I don't want to change the allocator for _all_ of cJSON: we often have to use third-party libraries which rely on cJSON, shipped as binary blobs, which haven't been tested with that. nlohmann/json lets you pick an allocator per instance instead of globally, which I greatly prefer.


Great to hear that you've had good experiences with user-defined allocators. It would be great if you could provide a pointer to an example, because we always fall short in testing allocator usage. A small example could really improve the status quo :)


Thanks a lot to you for writing such an awesome library! :)

Here's briefly how I use C++ allocators with the ESP32 and nlohmann/json (GCC 8, C++17 mode):

I have a series of "extmem" headers which define aliases for STL containers using my allocator (ext::allocator). The allocator is a simple one that just uses IDF's heap_caps_malloc() to allocate memory in the SPIRAM of the ESP-WROVER SoC.

I then define in <extmem/json.hpp>:

    namespace ext {
        using json = nlohmann::basic_json<
            std::map, std::vector, ext::string, bool,
            long long, unsigned long long, double,
            ext::allocator, nlohmann::adl_serializer>;
    }
where `ext::string` is just `std::basic_string<char, std::char_traits<char>, ext::allocator<char>>`. To be able to define generic from/to_json functions in an ergonomic way, I had to re-export the following internal macros in a separate header:

    #define JSON_TEMPLATE_PARAMS \
        template<typename, typename, typename...> class ObjectType,   \
        template<typename, typename...> class ArrayType,              \
        class StringType, class BooleanType, class NumberIntegerType, \
        class NumberUnsignedType, class NumberFloatType,              \
        template<typename> class AllocatorType,                       \
        template<typename, typename = void> class JSONSerializer

    #define JSON_TEMPLATE template<JSON_TEMPLATE_PARAMS>

    #define GENERIC_JSON                                            \
        nlohmann::basic_json<ObjectType, ArrayType, StringType, BooleanType,             \
        NumberIntegerType, NumberUnsignedType, NumberFloatType,                \
        AllocatorType, JSONSerializer>
I am now able to just write stuff like the following:

    JSON_TEMPLATE
    inline void from_json(const GENERIC_JSON &j, my_type &t) {
        // ... 
    }

    JSON_TEMPLATE
    inline void to_json(GENERIC_JSON &j, const my_type &t) {
        // ...
    }
And it works fine with both nlohmann::json and ext::json.

In the rest of the code, everything stays the same: I simply use ext::json (and catch const ext::json::exception&) as if it were the default version, and it works great. FYI, I'm currently using nlohmann/json v3.9.1.


I now know what my next test will be for DAW JSON Link. I hadn't even thought of trying an ESP32, and the memory requirements are just a small amount of stack space plus the data structures being serialized and the resulting output buffer or string.

NM. Looks like it's GCC 8, which doesn't fully support C++17.


In my experience I've yet to find a C++17 feature I need that GCC 8 didn't have. I've written crazy complicated C++17 on the ESP32 without too many worries (the true issues are always space and memory constraints).


I know it hits compiler bugs in GCC 8 that I don't know how to work around without major changes. Then again, it might be fine for most things; I'm just remembering trying it in CI.


The only bugs I hit were SIGSEGVs from the compiler -- annoying, yes, but nothing too serious. After I switched some of my desktop projects to C++20 I started seeing internal compiler errors in Clang, GCC and MSVC alike. Always very pleasant, I must say.


This prompted me to look again, and it wasn't as bad as I remembered with GCC 8.4.0. A couple of minor changes and I am building now. Cool.


Pluses: as a user of nlohmann/json, I also like the support for JSON Pointer, JSON Patch and JSON Merge Patch, which comes in handy at times. I like their way of handling to/from_json (declare functions with the appropriate signature and namespacing and the lib will pick them up seamlessly in implicit conversions). The "standard" container facades are appreciated.

Minuses: I'd like a way to append a (C-)array into a JSON array "at once" rather than iteratively (i.e., O(n) instead of O(n log n)). Also, the lack of support for JSON Schema is... slightly annoying.


Can you elaborate on your minuses? I don't understand what you mean by "at once" in that context.


Yeah, the support for pointer / patch etc. is a definite plus. The customization-point thing I tend to handle with my own customization points, but it's pretty good when a bunch of libraries settle on a standard one (e.g. I think serde in Rust has achieved that a bit, thanks to the ecosystem there); I definitely want it to be static.

I didn't realize that about the array complexity. Can you not just initialize a JSON array of N elements in O(N) (conceding a `.reserve(N)`-style call beforehand if required)? rapidjson is pretty good about that sort of thing; cJSON's arrays are linked lists, so I basically think of its performance as being at a different level, and it's mostly about compile time for me.


I haven't found a way to do that. The only O(n) to-JSON-array path is the initializer-list constructor, AFAICT.


For JSON schema, I've found that this third party library works well: https://github.com/pboettch/json-schema-validator



