My feeling is that speculation by humans about the motivations and actions of an SI (AI or any other type) is not worthwhile. It's equivalent to ants speculating about humans, or humans speculating about God.
So, on to the worthless speculation! An SI has no reason to deliver an ultimatum. It can simply take human civilization and modify it at the optimal rate consistent with its (incomprehensible) goals. For example, let's arbitrarily assume that the SI has determined that revealing itself to humanity will slow our "progress" toward whatever its goal for us is, but, being a benign being, the SI wants to stop wars.
In this (arbitrary) scenario, the SI cannot instantly stop all wars without revealing itself, but it might guide humanity to prevent big wars and minimize small ones. I think your "one week" ultimatum is unlikely. The SI could simply take control of the bodies of anybody who breaks its rules, without even letting them know.
Basically, I don't believe that SIs that are only somewhat more intelligent than humans will exist, except very transiently, because an SI that's only somewhat more intelligent than a smart human can make itself smarter still. This feedback mechanism has no obvious limit until the SI has godlike powers, and the transformation will occur in days or weeks.
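Just to illustrate the compounding argument, here's a toy sketch. Every number in it (the starting level, the 50% gain per cycle, the 12-hour cycle time, the "millionfold" threshold) is an arbitrary assumption of mine, not anything derivable from the scenario; the only point is that any steady multiplicative self-improvement crosses any fixed threshold in a surprisingly small number of cycles.

```python
# Toy model of the self-improvement feedback loop described above.
# All parameters are arbitrary illustrative assumptions.

def days_to_godlike(start=1.0, gain_per_cycle=0.5,
                    threshold=1e6, hours_per_cycle=12):
    """How many days a compounding self-improvement loop takes to cross
    an arbitrary 'godlike' threshold, if each cycle the SI redesigns
    itself to be (1 + gain_per_cycle) times as capable."""
    intelligence = start
    cycles = 0
    while intelligence < threshold:
        intelligence *= (1 + gain_per_cycle)
        cycles += 1
    return cycles * hours_per_cycle / 24.0

if __name__ == "__main__":
    # With a 50% gain every 12 hours, going from "smart human" (1.0) to a
    # millionfold advantage takes roughly two and a half weeks.
    print(f"{days_to_godlike():.1f} days")
```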