00:15:15 Unstable branch on cbro.berotato.org updated to: 0.30-a0-360-g447f08457e (34)
00:54:26 Monster database of master branch on crawl.develz.org updated to: 0.30-a0-360-g447f08457e
01:31:29 Fork (bcrawl) on crawl.kelbi.org updated to: 0.23-a0-4850-gd9e8576752
04:22:01 Experimental (bcrawl) branch on underhound.eu updated to: 0.23-a0-4850-gd9e8576752
05:06:55 Unstable branch on crawl.akrasiac.org updated to: 0.30-a0-360-g447f084 (34)
06:47:14 <advil> a lot of stuff could just be written much more cleanly; even being able to write async def in a few cases would probably help with debugging too
06:52:39 <advil> e.g. the save info stuff works by constructing a somewhat insane chain of callbacks, each of which checks the output of a process; that callback chain construction gets rewritten as something like just: await asyncio.gather(*[self.update_save_info(g) for g in config.games])
06:53:59 <advil> some of this I could probably do with tornado coroutines at this point
06:55:36 <advil> though native coroutines are 8 years old at this point, so that's a bit annoying
08:04:39 <advil> hm, and even with tornado coroutines I hit the problem that it's very hard to convert just part of a program to using coroutines
10:21:47 -!- sockthog- is now known as sockthot-
10:38:18 <advil> welp, how much time have I spent staring at this code so far without noticing that ttyrecs are opened in unbuffered write mode
11:16:07 <gammafunk> do we not build up enough ttyrec data in a buffer that unbuffered mode is ideal?
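[ed. note] The callback-to-coroutine rewrite advil describes can be sketched in a few lines of Python. This is a hedged illustration, not webtiles code: update_save_info and the games list come from the chat message above, and everything else here is an invented stand-in.

```python
# Sketch: a hand-built chain of callbacks that each checks a process's
# output collapses into plain coroutines plus one asyncio.gather call.
# All names and bodies are hypothetical stand-ins, not webtiles code.
import asyncio

class SaveInfoUpdater:
    def __init__(self, games):
        self.games = games
        self.save_info = {}

    async def update_save_info(self, game):
        # stand-in for spawning a process and inspecting its output;
        # a real version might await asyncio.create_subprocess_exec(...)
        await asyncio.sleep(0)
        self.save_info[game] = f"save info for {game}"

    async def update_all(self):
        # the entire callback-chain construction becomes this one line
        await asyncio.gather(
            *[self.update_save_info(g) for g in self.games])

updater = SaveInfoUpdater(["dcss-web-trunk", "dcss-web-0.29"])
asyncio.run(updater.update_all())
```

With native `async def`, each failure also surfaces as an ordinary exception with a stack trace, which is the debugging benefit mentioned above.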
11:21:15 advil * 0.30-a0-361-g30ec35652d: refactor: clean up TerminalRecorder a bit (3 hours ago, 2 files, 7+ 9-) https://github.com/crawl/crawl/commit/30ec35652d7f
11:21:15 advil * 0.30-a0-362-g0b7830ef10: feat: more manual timing logging (23 minutes ago, 4 files, 61+ 21-) https://github.com/crawl/crawl/commit/0b7830ef10e1
11:21:15 advil * 0.30-a0-363-g76810e16d9: fix: use buffered writing mode for ttyrecs (3 minutes ago, 3 files, 28+ 18-) https://github.com/crawl/crawl/commit/76810e16d90d
11:21:22 <advil> there's like a million tiny writes
11:22:34 <advil> the default buffer size is still not huge in comparison (it's maybe 4-5 typical keypresses), but I think calling an unbuffered write on every single terminal write is not ideal
11:22:45 <advil> there's no buffering in TerminalRecorder, if that's what you mean
11:23:05 <advil> it goes straight from a read from the terminal to a write call on the ttyrec file
11:29:55 <advil> when moving around, a single step is 12 distinct write calls with 2166 bytes total (where the default buffer size is about 8k)
11:31:45 <advil> (I do actually wonder if the crawl process's terminal writes could be a bit better; I don't know why webtiles is getting its data from what is visually a single redraw in 6 chunks (+ 6 header writes))
11:33:46 Unstable branch on crawl.kelbi.org updated to: 0.30-a0-363-g76810e16d9 (34)
11:34:54 advil * 0.30-a0-364-gf6b3ef1049: fix: remove an obsolete log message (2 minutes ago, 1 file, 1+ 2-) https://github.com/crawl/crawl/commit/f6b3ef104934
11:48:39 Unstable branch on crawl.kelbi.org updated to: 0.30-a0-364-gf6b3ef1049 (34)
12:07:49 Build failed for master @ f6b3ef10 https://github.com/crawl/crawl/actions/runs/3456444524
12:08:39 (flake)
12:23:16 Is it correct that executing a lua autopickup func is not atomic? I can't see why, but the behavior sure seems that way
12:40:34 <advil> what do you mean by atomic?
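[ed. note] The buffering tradeoff discussed here (12 tiny writes per step vs an ~8k default buffer) is easy to demonstrate. A minimal Python sketch, counting how many writes actually reach the underlying sink; the class and file shapes are illustrative, not TerminalRecorder's code:

```python
# Sketch: many tiny writes through an unbuffered file each hit the
# underlying sink (a syscall, in the real case), while a default-sized
# buffer coalesces them into one write on flush. Numbers mirror the
# chat: 12 writes of ~180 bytes per step, ~8 KiB default buffer.
import io

class CountingRaw(io.RawIOBase):
    """Raw sink that counts how many write() calls actually reach it."""
    def __init__(self):
        self.calls = 0
        self.size = 0
    def writable(self):
        return True
    def write(self, b):
        self.calls += 1
        self.size += len(b)
        return len(b)

# roughly one player step: 12 distinct writes, ~2160 bytes total
chunks = [b"x" * 180 for _ in range(12)]

unbuf = CountingRaw()
for c in chunks:
    unbuf.write(c)          # unbuffered: 12 underlying writes

buf_raw = CountingRaw()
writer = io.BufferedWriter(buf_raw, buffer_size=8192)
for c in chunks:
    writer.write(c)         # accumulates in the 8 KiB buffer
writer.flush()              # buffered: 1 underlying write
```

Since the 2160-byte step fits inside the 8192-byte buffer, the buffered path makes one underlying write where the unbuffered path makes twelve.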
12:44:22 Executed without other code executing; like, if I sprinkle mprs() in my autopickup, it outputs as if there are concurrent calls going on
12:45:50 as in, if I have 4 mprs(), output might be "1, 2, 1, 3, 2, 4, 3, 4". Also, the item object can suddenly change mid-execution when Jiyva slimes eat items. That last one is the actual problematic case
12:46:41 I have a fix for that last one, but I don't feel familiar enough with the details to push a PR yet
12:51:08 better description of the Jiyva issue: https://github.com/crawl/crawl/commit/8650e12f5031ebd1b36faf2ed5602016ed44eacb
12:53:58 <advil> in general, calling into crawl code makes it very hard to predict what else might happen, including mprs themselves
12:56:10 <advil> if you don't call back into c++ code, it's going to be atomic by that definition though
12:56:40 <advil> also, ch_force_autopickup is called who knows when, so I wouldn't trust much about its timing
12:57:17 Ah ok, that makes a lot more sense, thanks. Calling back into the c++ is definitely going on
12:57:28 <advil> for the jiyva one, I would find it very hard to evaluate without a complete set of steps to repro
12:58:02 <advil> that doesn't seem like something that should be able to happen, even calling back into c++
12:58:55 Totally hear you on that; I can make a simpler RC that will cause the issue and maybe attach it to a PR
12:59:21 <advil> unless what's happening is that you're trying to save item info from inside the autopickup fn and reuse it outside of the autopickup fn?
12:59:28 The issue really is "crash on most dropped items" vs no problem. And no problems without Jiyva, across lots of testing
12:59:33 <advil> or across calls to the autopickup fn
12:59:53 No, it's all in one autopickup call
13:00:25 mpr(item.name()) literally changes from "falchion" to "orc corpse" in a single call when Jiyva is in the mix
13:00:49 i.e. multiple mpr() in one autopickup call
13:00:59 <advil> what is happening in between the mprs?
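[ed. note] The bug pattern described here (a hook holding a live reference into a shared item array while an engine callback mutates that slot) can be illustrated with a small Python analogy. Everything below is a hypothetical stand-in, not crawl's clua API; it only mirrors the "reference into env.item[] vs copy" fix mentioned above.

```python
# Analogy for the Jiyva issue: reading through a live reference into a
# shared item slot sees the mid-call mutation; snapshotting the item
# before any engine callback does not. All names are illustrative.
import copy

env_item = [{"name": "falchion"}]   # stand-in for env.item[]

def engine_call():
    # stand-in for a C++ call with side effects: the slot is reused
    # (e.g. a slime eats the weapon, an orc corpse lands in the slot)
    env_item[0] = {"name": "orc corpse"}

def autopickup_by_reference(slot):
    first = env_item[slot]["name"]  # read via the live slot
    engine_call()                   # callback into the "engine"
    second = env_item[slot]["name"] # slot has changed under us
    return first, second

def autopickup_by_copy(slot):
    item = copy.deepcopy(env_item[slot])  # snapshot before callbacks
    first = item["name"]
    engine_call()
    second = item["name"]           # snapshot is unaffected
    return first, second

ref_result = autopickup_by_reference(0)
env_item[0] = {"name": "falchion"}  # reset the slot for the second run
copy_result = autopickup_by_copy(0)
```

As advil notes below, the copy fixes the symptom, but a slot mutating mid-hook suggests the hook is being invoked somewhere it isn't safe to hold the item reference at all.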
13:01:30 lots of calls to the crawl API, pretty normal stuff like it.weap_skill, it.ac, etc.
13:02:14 It is complicated code and I get that stuff being suspect
13:02:50 But also, it's been tested a lot and this only happens with Jiyva, and making a copy of the item instead of using the reference into env.item[] fixes it
13:03:42 I don't mean to bog you down with hypotheticals though; I can make a simpler RC to reproduce
13:08:07 <advil> making a copy may fix the immediate problem, but it sounds to me like something else is going wrong; calling back into crawl may have side effects, but it shouldn't cause time to pass in world_reacts (which is what it would take for slimes to actually eat items)
13:09:20 <advil> I suspect there might be a call to item_needs_autopickup at a place where it isn't safe to call lua with the item ref
13:17:11 -!- sockthot- is now known as sockthot
13:21:57 Makes sense; I'll start looking at what I'm using in the API. It'll be a long list, but it looks like item.name() alone doesn't cause the issue
15:58:30 <advil> does anyone know of the dgl_status file has anything to do with dgamelaunch
15:58:35 <advil> *if
15:59:52 <advil> (as far as I can tell, it doesn't)
16:04:10 <kate> i think it's for listing the information about a game in the dgl spectate menu?
16:05:12 <advil> dgl doesn't seem to use it?
16:05:22 <advil> or if it does (/can), I can't figure out how
16:05:54 <kate> ah, in that case i probably can't help, heh
16:06:00 <kate> i have no clue about dgl's inner workings, just had a vague recollection
16:06:20 <advil> granted, I'm not entirely sure how that list is generated
16:06:35 <advil> but I'm pretty sure I moved dgl-status on cao at some point without changing anything for that, and it still works
16:11:55 <advil> ahh, I think that happens via inotify, which looks at the where files the crawl binary generates
16:12:07 <advil> lol: "Note: Some of these commands will probably change names soon."
16:12:17 <advil> written in, I'm gonna guess, 2009
16:19:02 <advil> somehow mediated by inprogress dirs
16:19:26 <advil> although I seem to remember there's another daemon that has something to do with this
16:23:59 Unstable branch on underhound.eu updated to: 0.30-a0-364-gf6b3ef1049 (34)
17:20:27 <gammafunk> @advil can I use l-crawl.cc's crawl_err_trace (aka crawl.err_trace in clua) to get a traceback on a clua function call?
17:26:44 <gammafunk> cpp: // Can be called from within a debugger to look at the current Lua call stack. (Borrowed from ToME 3) void CLua::print_stack()
19:06:07 <advil> crashlogs use CLua::print_stack()
19:06:58 <advil> can't really say I know what either of these functions is doing, but that one looks more plausible to me?
19:14:00 <gammafunk> I'm looking for a way to get a lua stack trace for qw under clua, and I don't think I can access ::print_stack() from clua itself
19:14:40 <gammafunk> I need to spend some time looking into this eventually
19:15:26 <gammafunk> there's a debug library in lua that has a traceback, which I think ::print_stack() may likewise effectively be using, but that library isn't available in clua itself
19:16:16 <advil> it's just not very obvious to me what crawl_err_trace actually does, though I think it needs an error of some kind to actually do it
19:17:02 <gammafunk> yeah, I had similar difficulty understanding it. I did try to run a qw function through it, including one that generates an error, but couldn't get anything useful out of it
19:17:28 <gammafunk> certainly doesn't help that its documentation seems to be cut off
19:17:55 <advil> haha yeah, what does it return???
19:19:02 <advil> the one use of it in crawl code is also somewhat cryptic
19:22:27 <gammafunk> there's a use of it somewhere? I only see it being defined and then added to the clua lib
19:23:16 <gammafunk> oh
19:23:18 <gammafunk> I see it
19:23:55 dcssrhett1 (L7 OpEE) Crash caused by signal #6: Aborted (D:5)
23:35:00 Unstable branch on crawl.develz.org updated to: 0.30-a0-364-gf6b3ef1049 (34)
23:56:46 Windows builds of master branch on crawl.develz.org updated to: 0.30-a0-364-gf6b3ef1049
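[ed. note] What gammafunk is after (run a function and, if it errors, get the full call stack rather than just the error message) is the classic traceback-on-error wrapper; in stock Lua it's xpcall with debug.traceback as the message handler. A hedged Python analog of the pattern, with all names invented:

```python
# Analog of a traceback-on-error wrapper: call fn, and on failure
# return the formatted stack trace instead of just the exception text.
# This is a generic sketch, not crawl's crawl.err_trace.
import traceback

def err_trace(fn, *args):
    """Call fn(*args); return (True, result) or (False, stack trace)."""
    try:
        return True, fn(*args)
    except Exception:
        return False, traceback.format_exc()

def inner():
    raise RuntimeError("qw blew up")

def outer():
    return inner()

ok, result = err_trace(outer)
# ok is False; result shows the outer -> inner call chain
```

The key property is the one debated above: the wrapper produces nothing useful unless an error actually propagates through it.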