Selector table too big during Object.readArchive


Selector table too big during Object.readArchive

alln4tural-list
hi list,
 
i get the following error when trying to read an archive that i've been adding stuff to over the last couple of weeks:
 
ERROR: Selector table too big: too many classes, method selectors or function definitions in this function. Simplify the function.
Next selector was: prUnarchive
  in file 'selected text'
  line 63672 char 11:
  prUnarchive(o,p);

-----------------------------------
-> 0
 
 
i've read up on "Selector table too big" in the list archive (it doesn't seem to be mentioned in the docs),
but feel my case is a bit different, i'm just using Object.writeArchive / .readArchive
and had no inkling i might run into some limitation.

The object is an array of arrays of Pbind key/value arrays, big (the archive file is 2 MB) but not crazy big (i have others several times this size that haven't caused me any trouble).
The top array has about 50 elements (i'm guessing, because the previous version i saved has 49, and causes no trouble), but that, too, isn't some magic limit, because i have others with more.
After rooting around a bit, i feel it's not the size, but something else about the archive that renders it unreadable.
 
It's here, if anyone would care to have a look:
https://gist.github.com/alln4tural/95a7dfb95a8bb95e5d3f7ca9dd7cc92d
Here's the previous version, that _can_ be read in:
https://gist.github.com/alln4tural/ecb3201ef24578e67db31a1d0402b3c3

In general, though:
 
a) Does the error message above provide any insights into what might be wrong with the archive (assuming, as i do, that it's not its sheer size)?
 
b) Would an error be posted when writing an Archive that's too big (or otherwise faulty) to be read back in?
I've been creating a lot of archives, and this incident gives me a sinking feeling ..
 
c) Can some rules of thumb be formulated on how to avoid creating archives that cannot be read back in?
These would be a fine addition to the Object.writeArchive doc, which currently does not even hint at the possibility.
 
d) Is there any way to recover the contents of the archive, split it up in some way ..
(i've been having fun trying to reverse engineer the archive format, but it appears not to be a simple matter of deleting the first or last couple of entries ..)
 
thanks in advance for any ideas.
cheers,
eddi
--
https://soundcloud.com/all-n4tural
_______________________________________________ sc-users mailing list info (subscription, etc.): http://www.birmingham.ac.uk/facilities/ea-studios/research/supercollider/mailinglist.aspx archive: http://www.listarc.bham.ac.uk/marchives/sc-users/ search: http://www.listarc.bham.ac.uk/lists/sc-users/search/

Re: Selector table too big during Object.readArchive

brianlheim
Hey eddi,

a) There's nothing wrong with the archive, except that it's too big. What you're experiencing is documented in Reference/Literals:

"A single function may contain no more than 256 selectors. If this limit is exceeded, a compiler error is printed:
ERROR: Selector table too big: too many classes, method selectors or function definitions in this function. Simplify the function."

The culprit here is the large number of key-value pairs, for instance:

```
// lines 19580-19583
    3260, [ source: nil,  pattern: 1, 
        envir: nil,  clock: nil, 
        quant: nil,  condition: true, 
        reset: nil ],
```

Each of those `selector: value` pairs counts toward that limit.
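For a rough illustration of the limit (an editor-added sketch, not from the thread; the `key%` identifiers are made up), a single function that mentions more than 256 distinct selectors should reproduce the same compiler error:

```supercollider
// Generate a function body that calls ~300 distinct setter selectors
// (key0_, key1_, ...). Interpreting it as one function should exceed
// the 256-selector limit and print the same error as above.
(
var src = "{ var e = (); "
    ++ (0..300).collect({ |i| "e.key% = %;".format(i, i) }).join(" ")
    ++ " e }";
src.interpret;
)
```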

b) Yes, an error will be posted every time you exceed that limit.

c) Split up the thing you're trying to archive.

d) Yes, you can execute the file in parts. First evaluate the first 3000 lines of the file (`o = [ ... ];`), then evaluate the last 60000 lines of the file (`p = [ ... ];`), then execute the final statement (`prUnarchive(o,p);`). This is precisely what Object.readArchive does: it reads a file and executes all the code in it, returning the result. The interpreter just can't handle executing all of those things as a single instruction.
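Concretely, the manual recovery might look like this (an editor-added sketch, not from the thread; the exact line ranges depend on your file, and `o`/`p` are the interpreter variables the archive itself assigns to):

```supercollider
// In the SuperCollider IDE, open the archive file as plain text, then:
// 1. Select everything up to and including the `o = [ ... ];` statement
//    and evaluate the selection.
// 2. Select the `p = [ ... ];` statement and evaluate it.
// 3. Evaluate the final statement by itself:
prUnarchive(o, p);
// This should return the unarchived object, as Object.readArchive would,
// but without compiling the whole file as one function.
```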

Regards,
Brian




--
_______________________________
Brian Heim
507-429-6468

B.M. '14 University of Texas at Austin
M.M. '16 Yale School of Music

Re: Selector table too big during Object.readArchive

alln4tural-list
thanks very much for looking into this, Brian!
Your tip with the evaluation in parts solved my immediate problem.
I continue below in the interest of science, not to complain.
 
> a) There's nothing wrong with the archive, except that it's too big.
 
here's a much bigger one (81 vs. 50 top array elements, 6MB vs. 2MB) that _can_ be read in:
https://gist.github.com/alln4tural/ce68b8a6bf3d865810f7f7bfe71130b5
 
(you can hear it here:
https://soundcloud.com/all-n4tural-firehose-ii/condensate-of-a-live-coding-session-that-ended-20170124-170352
: )

(i note that searching for "selector" in the Help Browser does not lead to the page you mention, i wonder why that is ..)

> b) Yes, an error will be posted every time you exceed that limit.
i meant, an error or warning during Object.writeArchive would be most useful; there was none.

It would be good if the docs mentioned the possibility
http://doc.sccode.org/Classes/Object.html#-writeArchive
 
"Beware that the object you are archiving may not be suitable for reading back in with .readArchive, for instance if it's too big (?), you won't get a warning now, but half a year later when you .readArchive it back in, boy will you get a nice surprise."
 
i know talk is cheap, but the fact that the manual workaround you suggested worked hints at the possibility that this is a surmountable technical restriction.
i.e. couldn't the .readArchive implementation do the chunking internally?
or could the selector table just be enlarged, Moore's Law and all that?
 
i pursue this because, as i mentioned, i've been creating a lot of archives, and this incident gives me a sinking feeling ..
or is there some other, more robust archiving mechanism i should be using?

cheers!
eddi
 
--
https://soundcloud.com/all-n4tural

Re: Selector table too big during Object.readArchive

ddw_music
alln4tural-list wrote:
> or could the selector table just be enlarged, Moore's Law and all that?
Unlikely, without changing the structure of the interpreter's bytecodes. Currently a selector can be encoded in one byte. Changing that might be far-reaching, but also might not -- needs research, and devs are already backlogged.

> i pursue this because as i mentioned, i've been creating a lot of archives, and this incident gives me a sinking feeling ..
> or is there some other, more robust archiving mechanism i should be using?
Archiving is really a convenience, and programming conveniences are almost never able to handle all cases equally well.

The best solution may be to write your own code to store the data in a way that's tailored to the data structures. Then you can also control how you read it in. File() is the class to start with.
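As a minimal sketch of that approach (editor-added, not from the thread; it assumes the data is an array of key/value arrays whose values are all literals, so `.asCompileString` round-trips cleanly):

```supercollider
// Write: store each top-level element as compileable source, one per line.
(
var data = [ [\degree, 0, \dur, 0.25], [\degree, 4, \dur, 0.5] ]; // example data
var f = File("~/mydata.scd".standardizePath, "w");
data.do { |item| f.write(item.asCompileString ++ "\n") };
f.close;
)

// Read: interpret each line separately, so no single function
// ever approaches the 256-selector limit.
(
var f = File("~/mydata.scd".standardizePath, "r");
var result = [];
f.readAllString.split(Char.nl).do { |line|
    if(line.notEmpty) { result = result.add(line.interpret) };
};
f.close;
result.postln;
)
```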

hjh

Re: Selector table too big during Object.readArchive

brianlheim
Hey eddi, thanks for your reply.

> here's a much bigger one (81 vs. 50 top array elements, 6MB vs. 2MB) that _can_ be read in:

Sorry for the confusion—when I say "big" I mean in terms of the number of selectors and class names it uses. This other file must use fewer than 256 unique identifiers of those types.

> i meant, an error or warning during Object.writeArchive would be most useful; there was none.

This is not .writeArchive's responsibility. It does not and should not know that you'll be reading back in using .readArchive. I know the methods are commonly paired, but in general it's good engineering not to have one method with "knowledge" of another method's operation—you're just setting yourself up for code decay later on when the implementation of one of those changes and whoever does it forgets to update the warning message.

> It would be good if the docs mentioned the possibility

If you want to add this message, I would recommend submitting a PR on GitHub! Probably change the message to be a little less colorful, though ;)

> i.e. couldn't the .readArchive implementation do the chunking internally?

No, that would require some nasty introspection.

> (eddi) or could the selector table just be enlarged, Moore's Law and all that?

> (James) Unlikely, without changing the structure of the interpreter's bytecodes.

Honestly the structure holding these could probably just change from an array to a std::vector with very little (if any) performance cost or refactoring. Will probably not happen in 3.9 but we'll see. I am very interested in improving the old backend of the parser/compiler and since James has also been dipping his toes into that domain (and has been very good about reporting back his results) things are looking promising. But a lot of that code hasn't been touched for over a decade (and usually for good reason).

> (James) The best solution may be to write your own code to store the data in a way that's tailored to the data structures.

This is true; if you have a weird case or run up against language limitations it's honestly going to be a lot faster and easier to roll your own solution than wait for the heavens to align. :) Since you're using .writeArchive to store giant arrays, you could probably just run something like `.do(_.writeArchive)` instead; it would also be trivial to implement your own wrappers that do a bit more than that in terms of bookkeeping. If you're not sure how to do that I can explain more in depth.
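A hedged sketch of such wrappers (editor-added; the function names, the `.sca` extension, and the directory layout are illustrative assumptions, not anything from the thread):

```supercollider
// Hypothetical wrappers: archive each top-level element to its own file,
// so no single generated function approaches the selector limit.
(
~writeArchiveChunked = { |array, dir|
    array.do { |item, i|
        // zero-pad the index so pathMatch returns chunks in order
        item.writeArchive(dir +/+ "chunk_%.sca".format(i.asString.padLeft(4, "0")));
    };
};
~readArchiveChunked = { |dir|
    (dir +/+ "chunk_*.sca").pathMatch.collect { |path|
        Object.readArchive(path)
    };
};
)

// usage (illustrative):
// ~writeArchiveChunked.(myBigArray, "~/archives".standardizePath);
// ~restored = ~readArchiveChunked.("~/archives".standardizePath);
```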

Brian




