Real Image Denoising Challenge

myungje.lee

New member
Dear organizer,

How would you change a trained model's input shape to (1, None, None, 3)?

Let's say that

model = tf.keras.models.load_model(MODEL_PATH)

then what should I do?

Previously, I simply ran "new_model = Model(height=2432, width=3200)" and then "new_model.set_weights(model.get_weights())".

I tried "Model(height=None, width=None)", but it does not work.
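For concreteness, the rebuild-and-copy pattern I am using looks like this (a minimal sketch; "Model" is my own model-building function, not a Keras API). As far as I understand, this can only build with None dimensions when every layer is shape-agnostic, i.e. fully convolutional, so a Flatten, Dense, or fixed-size Reshape layer would break it:

import tensorflow as tf

# Rebuild the same architecture with dynamic spatial dims, then copy weights.
# Model(height, width) is my own builder function, not a Keras API.
trained = tf.keras.models.load_model(MODEL_PATH)

new_model = Model(height=None, width=None)    # this is the step that fails for me
new_model.set_weights(trained.get_weights())  # weight shapes do not depend on H, W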

I have already read "https://github.com/aiff22/MAI-2021-Workshop/blob/main/tensorflow_to_tflite.py".

Any tips or suggestions would be helpful.

Thank you
 

myungje.lee

New member
1. Are you using TF-nightly?
2. Have you enabled the experimental_new_converter option?
Thank you very much for the fast reply.

1. No, I am using TF 2.3.0.
2. I cannot even get to the 'converter' part.
Once I "model = tf.keras.models.load_model(MODEL_PATH) "
And print(model) , print(type(model)) , print(model.summary()) gives just like the screen shot that i attached.
How would you change the input shape (1, None, None, 3) with this kind of situation.
I think this has to do with something like 'graph' mode or something. Also in the code here, "https://github.com/aiff22/MAI-2021-Workshop/blob/main/tensorflow_to_tflite.py", how would you load weight ?
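What I imagine is something like the concrete-function route below (an untested sketch on my side), where tf.TensorSpec pins the serving signature to (1, None, None, 3) without rebuilding the Keras model:

import tensorflow as tf

model = tf.keras.models.load_model(MODEL_PATH)

# Wrap the model in a tf.function and trace it with a dynamic-shape signature.
run = tf.function(lambda x: model(x))
concrete = run.get_concrete_function(
    tf.TensorSpec(shape=[1, None, None, 3], dtype=tf.float32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete])
converter.experimental_new_converter = True  # the option mentioned above
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)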

Also, is there any way that I could construct an input shape (1, None, None, 3) model from the trained weights (by using load_weights)?
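(I assume it would be the two lines below, where build_model and WEIGHTS_PATH are hypothetical placeholders, but I am not sure:)

# build_model and WEIGHTS_PATH are hypothetical placeholders.
new_model = build_model(height=None, width=None)  # same architecture, dynamic dims
new_model.load_weights(WEIGHTS_PATH)              # restores the trained weights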

Edit: Moreover, I am using this style of code to construct my model (https://www.tensorflow.org/tutorials/generative/pix2pix):

import tensorflow as tf

# downsample(), upsample(), and OUTPUT_CHANNELS are defined in the pix2pix
# tutorial linked above.
def Generator():
    inputs = tf.keras.layers.Input(shape=[256, 256, 3])

    down_stack = [
        downsample(64, 4, apply_batchnorm=False),  # (bs, 128, 128, 64)
        downsample(128, 4),  # (bs, 64, 64, 128)
        downsample(256, 4),  # (bs, 32, 32, 256)
        downsample(512, 4),  # (bs, 16, 16, 512)
        downsample(512, 4),  # (bs, 8, 8, 512)
        downsample(512, 4),  # (bs, 4, 4, 512)
        downsample(512, 4),  # (bs, 2, 2, 512)
        downsample(512, 4),  # (bs, 1, 1, 512)
    ]

    up_stack = [
        upsample(512, 4, apply_dropout=True),  # (bs, 2, 2, 1024)
        upsample(512, 4, apply_dropout=True),  # (bs, 4, 4, 1024)
        upsample(512, 4, apply_dropout=True),  # (bs, 8, 8, 1024)
        upsample(512, 4),  # (bs, 16, 16, 1024)
        upsample(256, 4),  # (bs, 32, 32, 512)
        upsample(128, 4),  # (bs, 64, 64, 256)
        upsample(64, 4),  # (bs, 128, 128, 128)
    ]

    initializer = tf.random_normal_initializer(0., 0.02)
    last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4,
                                           strides=2,
                                           padding='same',
                                           kernel_initializer=initializer,
                                           activation='tanh')  # (bs, 256, 256, 3)

    x = inputs

    # Downsampling through the model
    skips = []
    for down in down_stack:
        x = down(x)
        skips.append(x)

    skips = reversed(skips[:-1])

    # Upsampling and establishing the skip connections
    for up, skip in zip(up_stack, skips):
        x = up(x)
        x = tf.keras.layers.Concatenate()([x, skip])

    x = last(x)

    return tf.keras.Model(inputs=inputs, outputs=x)
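Since every layer in Generator() is convolutional, only the Input line pins the shape, so I imagine parameterizing it as below would work (a sketch; the input_shape argument and WEIGHTS_PATH are my own additions). At inference time, the height and width would still have to keep the skip connections aligned, i.e. be multiples of 256 for this 8-level U-Net:

def Generator(input_shape=(256, 256, 3)):  # assumed new parameter
    inputs = tf.keras.layers.Input(shape=input_shape)
    # ... rest of the body unchanged from the function above ...

trained = Generator()                       # fixed 256x256 graph, as trained
trained.load_weights(WEIGHTS_PATH)          # WEIGHTS_PATH is hypothetical

dynamic = Generator(input_shape=(None, None, 3))
dynamic.set_weights(trained.get_weights())  # kernel shapes do not depend on H, W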

I am struggling with this issue...
Thank you very much.
 

Attachments

  • Screenshot from 2021-03-21 01-42-17.png

Andrey Ignatov

Administrator
Staff member
Hello, could you update the runtime results of these two days, please?

As was mentioned above, all runtime values are computed by the Samsung Exynos team - it is very likely that they don't have access to the corresponding hardware right now, as it is Sunday.
 

Andrey Ignatov

Administrator
Staff member
@Msss, all submitted final models are running fine on our Samsung S21 dev phone; we are now waiting for the results from Samsung. In case they are unable to run some models, our runtime values will be used.
 