This work applies new ideas on top of FCOS and ATSS. I highly recommend reading up on both methods first. …
DataLoader is PyTorch's built-in way of preparing and feeding data when training models. The official docs do a great job of showing how Dataset and DataLoader interact to provide an easier, cleaner way to feed data.
But even after following through this great tutorial, I still wasn't sure how exactly DataLoader gathers the data returned by a Dataset into a batch. Dataset doesn't restrict how the data should be returned: it can return one object or multiple objects. So how does the DataLoader know how to bundle those returned objects into a batch?
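A minimal sketch of what this looks like in practice: a toy Dataset (the class name and data here are made up for illustration) whose __getitem__ returns two objects, and a DataLoader that stacks each returned position across the batch using its default collate function.

```python
import torch
from torch.utils.data import Dataset, DataLoader

# Hypothetical toy dataset: each item is a (feature, label) pair.
class ToyDataset(Dataset):
    def __init__(self):
        self.features = torch.arange(12, dtype=torch.float32).reshape(6, 2)
        self.labels = torch.arange(6)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # Returning multiple objects is fine; the DataLoader's collate
        # function stacks each position separately across the batch.
        return self.features[idx], self.labels[idx]

loader = DataLoader(ToyDataset(), batch_size=3)
features, labels = next(iter(loader))
print(features.shape)  # torch.Size([3, 2])
print(labels.shape)    # torch.Size([3])
```

Each position of the returned tuple becomes its own batched tensor, which is exactly the behavior of the default collate function.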
This post summarizes the key points I found important, along with some of my own comments. For more detail and the parts I haven't covered here, please refer to the paper.
A design space is defined by model-building parameters, each with its own range, and therefore defines the range of possible model structures.
By chasing design spaces instead of individual networks, we can discover general design principles that work across settings.
The quality of a design space can be measured by sampling network architectures from it and evaluating them. …
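To make the idea concrete, here is a hedged sketch of treating a design space as ranges over model-building parameters and sampling architectures from it. The parameter names and ranges below are illustrative assumptions, not the paper's actual design space.

```python
import random

# Illustrative design space: each parameter has its own range of values.
# These names and ranges are assumptions for the sake of the example.
DESIGN_SPACE = {
    "depth": list(range(4, 25)),       # number of blocks
    "width": [16, 32, 64, 128, 256],   # channels per block
    "bottleneck_ratio": [1, 2, 4],
    "group_width": [1, 2, 4, 8, 16],
}

def sample_model_config(space, rng=random):
    # One sampled architecture = one point in the design space.
    return {name: rng.choice(values) for name, values in space.items()}

# Judging a design space means sampling many architectures and looking
# at the distribution of their quality, not at a single best network.
configs = [sample_model_config(DESIGN_SPACE) for _ in range(5)]
for cfg in configs:
    print(cfg)
```

Evaluating each sampled config (training it and measuring error) then yields a distribution that characterizes the whole space.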
arXiv link: https://arxiv.org/abs/1611.05431
This paper introduces the ResNeXt architecture, which is built mainly upon ResNet. Naturally, it gives insight into how ResNeXt differs from, and improves upon, ResNet.
The key difference in the ResNeXt architecture is that it uses a different residual block structure than ResNet. This difference is well depicted in the following figure.
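A sketch of that block in PyTorch may help: the paper's 32 parallel "split-transform-merge" paths are equivalent to a single grouped 3x3 convolution with groups equal to the cardinality. The channel sizes below follow the paper's 256-d block, but treat the exact module layout as illustrative.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Sketch of a ResNeXt residual block via grouped convolution."""

    def __init__(self, channels=256, bottleneck=128, cardinality=32):
        super().__init__()
        self.net = nn.Sequential(
            # 1x1 conv reduces 256 channels to the 128-d bottleneck.
            nn.Conv2d(channels, bottleneck, 1, bias=False),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            # Grouped 3x3 conv: the aggregated parallel paths.
            nn.Conv2d(bottleneck, bottleneck, 3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            # 1x1 conv restores the 256-d output.
            nn.Conv2d(bottleneck, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection, same as in ResNet.
        return self.relu(x + self.net(x))

x = torch.randn(1, 256, 8, 8)
print(ResNeXtBlock()(x).shape)  # torch.Size([1, 256, 8, 8])
```

Setting cardinality=1 and widening the bottleneck recovers an ordinary ResNet bottleneck block, which is what makes the comparison in the figure so direct.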
Making the nautilus command work on a headless server.
$ sudo apt install nautilus
I’ve tried installing a display manager, but what worked instead was starting the X server directly with xinit. related link
$ sudo apt install xinit
If it is a headless server, the default Xorg confs (located at /usr/share/X11/xorg.conf.d) are not going to work, because the machine doesn't have a real physical screen.
To be specific, I’ve tried launching the X server with
$ sudo systemctl start lightdm
However, this does not work. We can see why it doesn't inside the Xorg log file located…
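For reference, one common way to give Xorg a screen on a headless machine is the dummy video driver. The config below is a hedged sketch: the file path, mode, and sync ranges are assumptions you would adapt, not values taken from my setup.

```shell
# The dummy driver provides a virtual screen when no monitor exists.
sudo apt install xserver-xorg-video-dummy

# Illustrative config; path and values are assumptions to adapt.
sudo tee /etc/X11/xorg.conf.d/10-dummy.conf <<'EOF'
Section "Device"
    Identifier "DummyDevice"
    Driver "dummy"
    VideoRam 256000
EndSection

Section "Monitor"
    Identifier "DummyMonitor"
    HorizSync 5.0-1000.0
    VertRefresh 5.0-200.0
EndSection

Section "Screen"
    Identifier "DummyScreen"
    Device "DummyDevice"
    Monitor "DummyMonitor"
    SubSection "Display"
        Modes "1280x1024"
    EndSubSection
EndSection
EOF

# Then start X directly with xinit instead of a display manager.
xinit -- :1
```

With a virtual screen in place, GUI programs such as nautilus can then attach to that display.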
When creating a training script in TensorFlow, the need sometimes arises to add summary items (summary protobufs, to be exact) later on within the same step.
For example, let's say a training session is running with a metric-calculation step included. Periodically, I want to run a prediction on validation/test data and record the metrics for these predictions with the same summary writer used to log the training steps. In other words, a TensorBoard image like the following is desired:
In the above capture, the loss and metric are recorded for every training step. On the…
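A minimal sketch of this pattern, written with the TF2 tf.summary API for brevity (the log directory, tag names, and placeholder values are assumptions): training loss is logged every step, and validation metrics are written periodically to the same writer at the same step index, so TensorBoard aligns both on one timeline.

```python
import tensorflow as tf

# Illustrative log directory; adjust to your setup.
writer = tf.summary.create_file_writer("/tmp/logs/run1")

with writer.as_default():
    for step in range(100):
        train_loss = 1.0 / (step + 1)  # placeholder value
        # Logged every training step.
        tf.summary.scalar("loss", train_loss, step=step)
        if step % 10 == 0:
            val_metric = 0.5 + step * 0.004  # placeholder value
            # Same writer, same step index: this summary item is
            # added to the step's timeline alongside the loss.
            tf.summary.scalar("val/accuracy", val_metric, step=step)
writer.flush()
```

Because both scalars share the writer and the step counter, the validation curve lines up with the training curves rather than landing in a separate run.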
I have been studying YOLOv2 for a while and first tried using it for car detection in actual road situations. I used tiny-yolo as the base model with the pre-trained binary weights. While it recognized cars very well in traditional full-shot car images like the ones you'd see in a commercial, it did not work well on car images as seen from the driver's seat.
Clearly, the pretrained model was not trained on driver's-POV car images. In order to gather some data, I took the liberty of copying the blackbox videos…