mj's part time coding 2022 Q2
What
I propose to work for 3 months, spending 30 hours a week on Monero Core and Monero GUI, specifically on topics such as (in this order):
- reviewing the Monero Core and GUI code
- enabling and helping new developers
- providing more documentation for new devs
- CI fixes
- addressing user issues (whenever I can help)
- benchmarking tsqsim (although this one is arguable)
- regenerating and extending my Monero health report
- adding Monero-GUI to the health report
- general firefighting, whatever problems we face in the near future
Why
Over the last 3-month period, I've been fully focused on developing my tsqsim tool for the Monero Research Lab's OSPEAD project. Even though I did occasionally review new code in Monero Core and GUI, a few members noted that, since I was so focused on the tool, they felt developer resources were being drawn away from Core/GUI. I'd gladly take it as a compliment :>
The current state of tsqsim is "usable", but not yet perfect. To unleash its full potential, some more work has to be put in: I estimate ~2-4 months. However, this can also be scheduled for later (and half-time), while the OSPEAD research can already start based on the current state of tsqsim.
Therefore, in the next 3 months I'd like to catch up with the usual maintenance. Additionally, I'd like to continue enabling new devs by pointing them to documentation, explaining it and extending it. Previously, I was helping new devs in the #monero-dev channel. Just recently I noticed that there's quite a crowd awaiting directions in the Recruitment Matrix Channel, formed at the end of last year by @Rucknium (correct me if I'm wrong). I promised them that I'd be available from March for either 1-on-1 sessions or to answer general questions in the channel.
Benchmarking tsqsim
A special sub-task of the quarter would be benchmarking tsqsim, as requested by @selsta and @bigbklynballs. Even though C and C++ remain the fastest languages (yielding only to Assembler), I'm of the opinion that the USP of tsqsim is the ability to set up controlled experiments without the Researcher having to code them. This fact will be reflected in the benchmark, or, more generally, in a comparison. The user @bigbklynballs suggested benchmarking tsqsim against all 10 of his proposed alternatives, which were:
- https://github.com/statsmodels/statsmodels
- https://github.com/rapidsai/cuml
- https://github.com/h2oai/h2o4gpu
- https://github.com/alkaline-ml/pmdarima
- https://github.com/timeseriesAI/tsai
- https://github.com/facebookresearch/Kats
- https://github.com/unit8co/darts
- https://github.com/winedarksea/AutoTS
- https://github.com/alan-turing-institute/sktime
- https://github.com/linkedin/greykite
I'll spare the Community's funds by restricting the benchmarking process to 1 or 2 of the above tools and then ask for further wishes.
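As a sketch of what the timing side of such a comparison could look like (run_forecast is a hypothetical stand-in for whichever tool is being measured; the real comparison would also have to weigh forecast quality and setup effort, which is where tsqsim's controlled experiments come in):

```cpp
#include <chrono>
#include <functional>
#include <iostream>

// Time a single forecasting run and report wall-clock milliseconds.
double time_run_ms(const std::function<void()>& run_forecast)
{
    const auto start = std::chrono::steady_clock::now();
    run_forecast();
    const auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main()
{
    // Hypothetical placeholder workload standing in for tsqsim or one of the Python tools.
    auto dummy_forecast = [] {
        volatile double acc = 0.0;
        for (int i = 0; i < 1000000; ++i) acc += i * 0.5;
    };
    std::cout << "dummy run: " << time_run_ms(dummy_forecast) << " ms\n";
}
```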
Who
mj. I have been contributing to Monero Core since 2020. Here is a list of my previous work, all of it related to Monero, even where it got upstreamed.
Previous reports
Here is a list of the previous reports that describe my completed or started tasks in more detail:
- Previous CCS Proposal
- Postponed CCS Proposal (tsqsim)
Proposal
I will spend 30 hours a week on Monero for the next 3 month period, starting from 1st March.
I propose a wage of 45 €/h for 3 months. As of 01.03.2022 the average of the opening and closing XMR/EUR prices was (159.850 + 151.990)/2 = 155.92 € according to investing.com. This makes a total of: 45 €/h * 30 h/week * 4 weeks/month * 3 months / 155.92 €/XMR = 103.899 XMR. Rounded down to be divisible by 3 -> 102 XMR.
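For transparency, the same arithmetic as a minimal C++ sketch (the figures are the ones quoted above; the final step is the rounding-down-to-a-multiple-of-3 rule):

```cpp
#include <cmath>
#include <iostream>

int main()
{
    const double rate_eur_per_h  = 45.0;   // proposed hourly wage
    const double hours_per_week  = 30.0;
    const double weeks_per_month = 4.0;
    const double months          = 3.0;
    // Average of the 01.03.2022 open/close XMR/EUR price from investing.com
    const double xmr_price_eur   = (159.850 + 151.990) / 2.0;   // 155.92

    const double total_eur = rate_eur_per_h * hours_per_week * weeks_per_month * months; // 16200
    const double total_xmr = total_eur / xmr_price_eur;                                  // ~103.899

    // Round down to the nearest amount divisible by 3 -> 102 XMR
    const double rounded_xmr = std::floor(total_xmr / 3.0) * 3.0;

    std::cout << "Total: " << total_xmr << " XMR, rounded: " << rounded_xmr << " XMR\n";
}
```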
Cheers!
Expiration date
30 Jun, 2022
Merge request reports
Activity
mentioned in merge request !283 (closed)
I don't work for free, that's for sure. I have bills to pay and a family to feed. Something that is maybe out of reach for you.
Now, if it were only about money for me, I'd have been in Bitcoin a long time ago.
If you're so jealous about the money that I get, why don't you just start your own proposal? I'm only trying to help you.
Edited by mj
Thanks. I also think it's important. There are, however, some subtle limits to how many of my maintenance PRs the other devs can digest at a given time, and the opposite holds as well: there are limits to how many foreign PRs I'm qualified to review at a given time with my current knowledge. Therefore I think it's always healthy for everybody to switch from one major quarterly task to another.
Edited by mj
@mj you have two proposals here, which seems unintended based on the description. Separately, I don't support bundling proposals.
@luigi1111 if you mean that proposal (tsqsim), then it's being replaced (postponed) exactly through this proposal. The reasons are explained in the comments.
Briefly: the tsqsim proposal is waiting for @Rucknium to catch up, and at the same time @selsta pointed out that some maintenance/review tasks have accumulated over the last 3 months, when I was doing tsqsim almost full time. I hope that explains it and that I understood you well.
@mj ok, please close that one. I was referring to this proposal having two separate files (proposals) included.
added 2 commits
added 1 commit
- 66346922 - Update price to 1st March - the starting date
added 1 commit
- 09e0625d - Update price to 1st March - the starting date
This incompetent charlatan can't compare his own code produced after 9 years of "work".
Ok, what about monero related code?
During all previous "work", all "PRs" submitted by this individual contain only:
- removal of unused variables
- relocation of trivial functions
- review of trivial simple PRs
- "optimizations" less than 1%
- huge walls of texts as a cover smoke
Everybody who supports this proposal is as incompetent as this charlatan.
Season N+1, Episode 2: charlatan and 2nd try to obtain grant
Edited by w w
Hey! If your permanent influencing in Monero has finally taught you anything, then it's the realization that in order to win an argument, you have to bring some solid reasons. Good, good... because I have already been waiting so long to give my answers to this that I was starting to lose hope.
This incompetent charlatan can't compare his own code produced after 9 years of "work".
Through these 9 years I was working alone on the backbones of tsqsim, so I didn't have a need to gather experience in operating GitLab; I only learned C++. My sincere apologies for delivering more content than form. But come on... this is not my worst crime! Bear with me:
- removal of unused variables - briefly: no, not only this
- relocation of trivial functions - same as above. There was indeed some relocation done in order to limit the number of recursively included headers, directly measured in my report by these statistics:
and by its final effect - the compilation time:
So the expected result was achieved: I reduced the compilation time. This is probably the most representative PR, where "just moving code around" reduced the compilation time by ~14-19% (a generic sketch of the technique follows after this list).
- review of trivial simple PRs - There are continuously so many trivial PRs to review and tasks to solve that there's still no need for me to go deeper. Why am I almost the only one to even do this? Probably because of people like you, who send out a clear message that you'd disrespect any person cleaning up your mess. But no matter what you think of it and how you treat me, I see this mess as just a byproduct of something much greater that the more experienced devs do. Cleaning up this byproduct reduces the overall noise and speeds up the other devs' work. BTW, the experienced devs are paid accordingly more than me, and I find that fair.
- huge walls of texts as a cover smoke - Believe me, I'd prefer writing more code and documentation rather than reports. However, I believe I owe these reports to my generous donators, so that they know what they pay for and can decide whether they want to keep financing this kind of work or not. Here's a life hack for you: if you don't like these reports, you don't have to read them!
- "optimizations" less than 1% - In one of my latest PRs, I measured -1.5% using an objective method that reduces the influence of I/O bottlenecks (dynamic linking on a RAM disk), while @reemuru measured -0.91% and @jeffro256 measured -1.2%. It's hard to say that it's definitely less than 1%, but the most important thing here is that the cost of this gain was minuscule: I only had to change three lines, giving a huge reward-to-cost ratio. In fact, you've just inspired me to create a new way of measuring this kind of improvement - adjusting the gain for a normalized cost. Assuming that the maximal accepted cost is 1% and Monero has ~350000 lines of code, the reference cost is 1% of that, i.e. 3500 changed lines of code. To achieve the 1.5% gain I changed 3 lines, which is just 0.086% of those 3500. To adjust the gain for the normalized cost of 1%, I multiply it by 3500 / 3 ≈ 1166, so 1.5% * 1166 ≈ 1750% gain per normalized cost of 1% of the project's total line count.
From now on, I will be boldly using this statistic for my further improvements of this kind, but I still have to thank you for inspiring me. It's quite an achievement for your first ever contribution to Monero. Thanks! I predict you'll have a bright future in politics.
The only thing left to do is to think about the appropriate name for the statistic. Does "WelfarePrussia Statistic" sound good to you?
@One-horse-wagon It's actually quite trivial. There was no Community Meeting on the 27th of February, as would normally happen. Typically, one day after these meetings the approved proposals get merged. This one was shaped at the previous meeting, on the 13th of February, but not officially approved on the 27th. Since I know that these tasks were explicitly asked for, I carried on with them from the 1st of March, assuming it would all go smoothly.
I actually carry some of the blame for this myself, precisely for messing up the merge request itself. Now it's fixed, and @luigi1111 is already on it, after having pointed out these mistakes to me.
So in one sentence: except for the usual trolling, everything is fine.
Edited by mj
Related to this, I will link to your Reddit post about documenting how you do development in a specific IDE, utilizing CMake scripts.
That seems like it could be very useful for new developers - and quite possibly, even if it's not in their chosen IDE, they will still be able to get some value from it.
It seems that VSCodium (the open source version of VSCode) was quite popular.
Did you settle on a decision for which to document?
Edited by john_r365
Hey John,
The post helped me not only to pick the favorite IDE (VSCodium), but also to measure how popular the other IDEs are. VSCodium belongs to the Open Source group, where we already have Code::Blocks documented. C::B is not only Open Source, but also corporation-independent. Therefore, adding VSCodium to the group of documented IDEs will complete that group. Secondly, in the commercial group, CLion will be taken into account. Thirdly, possibly even earlier than CLion, I'd like to document QtCreator, as it represents the GUI development "group" (a group of its own, lol).
The other important conclusion was that, except maybe for Eclipse CDT, we may safely ignore any other IDEs, which will spare resources.
Edited by mj
mentioned in commit d7240f33
Incompetent individual (with c++ 9+ years of "experience") can't even measure build time properly. The easiest task ever. That https://github.com/monero-project/monero/pull/7000 brings only 9% reduction (before 56m - after 49m) of total build time instead of 14-19%. But who cares about it if with 4 jobs parallel compilation will be 14m ? So instead of fixing any security holes, logic errors, bugs that affects users of monero this "experienced" human optimization things for future generation of incompetent developers of monero. Is it adequate for cryptocurrency project to prepare for the next wave of incompetent developers who can't even use parallel compilation, has only 1 core workstation, can't setup programming IDE ?
Edited by - -"Incompetent individual"
Please... just "Charlatan".
"can't even measure build time properly"
From the table under the link of PR 7000:
| | Previous | Current |
| --- | --- | --- |
| Compilation | 798 times | 802 times |
| Parsing (frontend) | 1728.2 s | 1488.6 s |
| Codegen & opts (backend) | 1739.0 s | 1587.8 s |
| portable_storage.h | 168138 ms (included 82 times, avg 2050 ms) | 136062 ms (included 83 times, avg 1639 ms) |

(1488.6 - 1728.2) / 1728.2 * 100 = -13.86%
(136062 - 168138) / 168138 * 100 = -19.08%
Where's the mistake?
"So instead of fixing any security holes, logic errors, bugs that affects users of monero this "experienced" human optimization things for future generation of incompetent developers of monero."
I'm trying to fix everything that I have expertise in and where, in my opinion, the project is lacking, precisely because everybody else is busy fixing ONLY the problems that you mentioned. That said, I did have some success in fixing a security hole here, and I enabled code coverage for external repositories, which prevents the code from degrading by accident.
"But who cares about it if with 4 jobs parallel compilation will be 14m ? (...). Is it adequate for cryptocurrency project to prepare for the next wave of incompetent developers who can't even use parallel compilation, has only 1 core workstation, can't setup programming IDE ?"
I bet you have seen neither my Parallel Tests nor my Icecream integration PRs, where I leverage as many cores as possible. You see, I don't write this only for myself, but also for people with slower devices, like the RPi. Here's where I was actively helping with adoption for this platform. Secondly, a side effect of the compilation time reduction is that the RAM requirements per core decrease, as you can see in the images below, which come from my report. This means that only once I'm done with this cleaning part will an RPi user be able to use all of the platform's cores, which otherwise couldn't be used, because the platform has too little RAM for a parallel compilation of Monero.
http://cryptog.hopto.org/monero/health/img/mem.png
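As a rough illustration of that RAM constraint (the numbers below are hypothetical placeholders, not measurements from the report): the usable degree of parallelism is bounded both by the core count and by the available RAM divided by the peak RAM a single compile job needs, so lowering the per-job peak directly unlocks more cores on a small board.

```cpp
#include <algorithm>
#include <iostream>

// Usable parallel compile jobs = min(cores, available RAM / peak RAM per job).
int usable_jobs(int cores, double ram_gb, double ram_per_job_gb)
{
    return std::max(1, std::min(cores, static_cast<int>(ram_gb / ram_per_job_gb)));
}

int main()
{
    // Hypothetical RPi-like board: 4 cores, 4 GB RAM.
    std::cout << usable_jobs(4, 4.0, 2.0) << " jobs at 2 GB peak per job\n"; // 2
    std::cout << usable_jobs(4, 4.0, 1.0) << " jobs at 1 GB peak per job\n"; // 4
}
```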
So as you can see from the combination of these PRs and the RPi story, I don't think only about myself, but about the general user/developer base. Please imagine that there are people born in countries much less privileged than yours who are still very talented and could contribute if we reduce their technical barriers. Unless you want to tell me that, since they're poorer, it's not possible for them to be more talented than an average GitLab troll?
So now a question for you. If you tell me that the Parallel Tests and Icecream integration PRs do exactly what you recommend me to do - just using more cores - then why don't you do something productive and review them, so that they can be merged? You can even get paid for it. Just start a proposal. There are really only 2 requirements:
- You have to have a lot of time
- You have to have a brain
Since I clearly see that you have much more than "just a lot of time", there's already a 50% chance that you'll qualify.
Edited by mj
- https://github.com/monero-project/monero/pull/7000 - overstated result
problem: slow compilation
expected: use parallel compilation and solve more important problems of monero
suggested patch: relocation from portable_storage.h into portable_storage.cpp, overstated (9% vs 14-19%) result due to incorrect measurements (numbers from that table are wrong)
proper solution: solve more important tasks of monero and by the way decompose huge cpp objects into smaller ones in order to make compilation more efficient
problem: some el::Logger constructor isn't correct with nullptr as 2nd argument
expected: it's a modification of unreachable code. It's obvious from the relevant test (https://github.com/amrayn/easyloggingpp/blob/master/test/logger-test.h#L34) or the docs (https://github.com/amrayn/easyloggingpp#registering-new-loggers) that el::Logger shouldn't be used directly or even constructed with nullptr.
proposed patch: rethrow exception
proper solution: no changes
- https://github.com/monero-project/monero/pull/7643 - incorrect patch
problem: "-fprofile-arcs -ftest-coverage --coverage" aren't enabled for external/....
expected: identify the reason of missing flags, apply required changes
proposed patch: wrap the relevant code from CMakeLists.txt into a function and add 3 calls to it: in CMakeLists.txt (the old code did this), in external/easylogging++/CMakeLists.txt (the old code didn't, ok) and in contrib/CMakeLists.txt (the old code did this already, why the change?); all other modules in external are still unaffected (why?)
proper solution: reorder the commands in CMakeLists.txt in order to enable the flags for external/...
problem: concurrent execution of tests
expected: 3 lines of bash script
proposed patch: reinvented wheel in form of python script without any benefits
proper solution: 3 lines of bash script
- https://github.com/monero-project/monero/pull/7160 - useless and incomplete
problem: parallel compilation on many hosts
expected: general purpose task of distributed compilation should be addressed by wiki/docs specific to OS used by user
proposed patch: ubuntu specific shell cmds
proper solution: solve more important tasks of monero and by the way decompose huge cpp objects into smaller ones in order to make compilation more efficient
- https://github.com/monero-project/monero/pull/7979 - superseded by better patch
- RPi support is needed to help people from poor countries to compile monero
problem: inefficient compilation of monero
expected: use parallel compilation and solve more important problems of monero
suggested: focus on RPi support
proper solution: solve more important tasks of monero and by the way decompose huge cpp objects into smaller ones in order to make compilation more efficient; incompetent people like you are the main obstacle for talented poor who can solve real problems of monero unlike you