It means most of the challenges' preliminary results were published, like "Compressed Image Single Image Super-Resolution", for which I received the preliminary results. No news on this one yet. Either you rank first or last if you participated with a factsheet and the required files. You will be...
Hi, when should we expect the preliminary results?
The page states: "03.23.2021 Preliminary test results release to the participants"
Is it going to be later today, or has the initial plan changed?
I have solved my problem. If you need some sort of linear mapping, include it at the end of the model as a linear, floating-point operator (assuming a 0-1 range), then use the TFLite quantization: it takes care of everything and calculates the quantization differently, approximating the...
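A minimal sketch of the idea above, assuming TensorFlow/Keras: the base model (a hypothetical one-layer network here) outputs in the 0-1 range, the 0-255 linear mapping is appended as a float op inside the graph, and the TFLite converter then handles quantization for the whole thing.

```python
import tensorflow as tf

# Hypothetical base model producing outputs in the 0-1 float range.
base = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
])

# Fold the 0-255 linear mapping into the model itself as a floating-point
# op, so the converter accounts for it when computing quantization params.
inputs = tf.keras.Input(shape=(32, 32, 3))
outputs = tf.keras.layers.Rescaling(255.0)(base(inputs))
model = tf.keras.Model(inputs, outputs)

# Let the TFLite converter take care of the quantization part.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

With the mapping inside the graph, no manual pre/post-scaling is needed around the interpreter.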
you should do the scaling (scale, zero point, etc.) inside the model and let the TFLite converter do its job for the quantization part. It should look like:
lr_image = cv2.imread(filename)
# no extra pre-processing code here
sr_image = super_duper_model(lr_image)  # here sr_image is already 0-255 uint8
# no extra post-processing code here
cv2.imshow("result", sr_image)  # voila!
My understanding is that
raw_img = cv2.imread(imgName)
is the input, so anything below that should be integrated into the model, especially these parts:
raw_img = raw_img / input_scale + input_zero_point
and
sr = (sr - output_zero_point) * output_scale
so basically these are all "unfortunately"...
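For reference, the two lines above are the standard affine quantize/dequantize steps. A small NumPy sketch of the same arithmetic, with hypothetical scale/zero-point values (in practice they come from `interpreter.get_input_details()[0]["quantization"]` and the matching output details):

```python
import numpy as np

# Hypothetical quantization parameters; real values come from the
# TFLite interpreter's input/output details.
input_scale, input_zero_point = 1.0 / 255.0, 0
output_scale, output_zero_point = 0.00784, 128

raw_img = np.random.rand(1, 64, 64, 3).astype(np.float32)  # floats in 0-1

# Quantize: real domain -> integer domain
# (same as: raw_img = raw_img / input_scale + input_zero_point)
q_in = np.clip(np.round(raw_img / input_scale + input_zero_point),
               0, 255).astype(np.uint8)

# ... the quantized model would run here; use its output as q_out ...
q_out = q_in  # stand-in for the interpreter's quantized output

# Dequantize: integer domain -> real domain
# (same as: sr = (sr - output_zero_point) * output_scale)
sr = (q_out.astype(np.float32) - output_zero_point) * output_scale
```

If the scaling is folded into the model instead, these two steps disappear from the host code.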
Don't use quantize_model; use quantize_annotate_layer for individual layers. The problem is that you first use quantize_annotate_layer on the upsample block and then try to quantize the entire model, so you already have a quantize-annotated layer while quantize_model tries to do the annotation for all of the layers...
Even when I use Quantization-Aware Training, a model obtaining around 30 dB drops to 22-23 dB after conversion. What am I doing wrong while quantizing? Can anybody comment on this?
Here is a minimal working example with the problem:
class KerasLite:
    def __init__(self, interpreter...
When are we going to get feedback on the models we recently sent to the CodaLab server? It's been 1.5 days and we still haven't gotten a run result in the spreadsheet. This feedback is especially important since we don't have access to the hardware and don't even know its architecture and drawbacks...
While using TFLite on a PC, the model does not use the GPU and falls back to the CPU, since the TFLite interpreter is optimized for mobile GPUs rather than Nvidia GPUs. Still, 779 s is a lot; it seems like your model is kind of "huge" for the challenge. Try a very simple model, convert it to TFLite...
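The sanity check suggested above could look like this, assuming TensorFlow is installed: convert a deliberately tiny (hypothetical) model to TFLite and time a single CPU inference, to get a baseline before submitting a bigger network.

```python
import time
import numpy as np
import tensorflow as tf

# A deliberately tiny model (shapes here are assumptions), just to get a
# baseline for TFLite CPU inference cost on your machine.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(3, 3, padding="same"),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 64, 64, 3).astype(np.float32)
start = time.perf_counter()
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
sr = interpreter.get_tensor(out["index"])
print(f"inference took {time.perf_counter() - start:.4f}s, output {sr.shape}")
```

If even this toy model is slow on your setup, the bottleneck is the environment rather than the architecture; if it is fast, the 779 s points at model size.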