Thursday, February 2, 2017

Watching daily Feb 2 2017

(DCV For Live) - Discover the secrets of the gold mine in the forest, experience an exciting life - Duration: 10:04.

-------------------------------------------

(DCV For Live) - Use the slingshot to hunt and process sausage, experience life in the forest - Duration: 11:03.

-------------------------------------------

(DCV For Live) - Experience catching turtles and fish, and processing food in the - Duration: 8:07.

-------------------------------------------

ORGANIZING COMMITTEE FOR THE HISTORICAL RECORDS OF THE CITY COUNCIL AND THE CITY OF MARÍLIA - Duration: 52:04.

-------------------------------------------

Imersão Marajó - Da Tribu and Mãos Caruanas 2017 - Duration: 3:33.

Kátia Fagundes - Craftswoman: I am here, immersed in the woods, gathering inspiration for a new collection that came about through a partnership with Mãos Caruanas.

Sâmia Batista - Product Designer: In this research process, we're listening to their stories, both Josi's and Kátia's.

And from their origins and the techniques they master, we will start the development process for this new collection.

Tainah Fagundes - Da Tribu Creative Partner: In this relationship, once more, we strongly believe in collective strength

and in the feminine meaning within this work; we hope to achieve a collection that will surprise you.

We are at the initial stage, which means the research stage, the exploration of information.

At first came the urge to build a company that would develop natural jewelry aligned with both Marajoara patterns and the mystical Marajoara indigenous patterns that come from the Caruanas,

Josi Lima - Craftswoman: which carry a very strong energy, and that is what we like to work with, this energetic strength behind them.

Every line, every sketch is a place of power, a place of protection, a line that protects the body.

There are many stories, many narratives that we believe are very rich, so why not put them into design pieces? Into natural jewelry you can carry.

The first word that comes to mind, the first feeling that can bring us together in creation, connect us, right? That sense of sharing.

I've been thinking it is feminine empowerment, you know?!

Marajoara history is basically made by women. So, this whole feminine world is right here!

I believe it will be a very beautiful collection. A very organic collection. One that will bring the story and characteristics

of both Kátia, from Da Tribu, and Josi, from Mãos Caruanas.

Made by women and produced by women! I am very excited and hopeful that people will like it. I hope we can travel the world, cross oceans!

So wait for it! Stay tuned: a collection is coming that fuses Belém and Marajó, that is, Mãos Caruanas and Da Tribu.

-------------------------------------------

Information Encoding 2: Compression - Duration: 8:34.

Hi everybody!

In the last video, we saw how to encode information in general.

In this video, we will see how to make information take up as little space as possible on a computer.

We will talk about compression

We must distinguish two types of compression: lossless compression and lossy compression.

Lossless compression is used, for example, for text files or ZIP archives,

which means we cannot afford to lose any information.

On the other hand, with lossy compression, which is typically used for images, sound, and video,

we can afford to lose some information.

So for an image, if I don't encode all the information, the loss will be barely perceptible to the human eye,

so we can afford to discard that information.

Let's start with lossless compression

There are several methods; we are going to look at one called Huffman coding.

The idea is intuitive: in a text, some characters appear more often than others.

So we could encode the most frequent ones with a short sequence of bits

and the rarest ones with a long sequence of bits

Take, for example, a string with only 5 different characters: a, b, c, d, e.

We saw last time how we can encode each letter with a sequence of bits,

but with the same length for each letter.

But actually this is not necessary: we can encode each letter with a sequence of a different length.

The list of letters along with their encoding sequences is called an encoding dictionary.

We can easily represent these encodings with what we call an encoding tree, like this.

So we have here our encoding tree

and here we have a sequence we want to decode using the tree

so we start with the first bit 0

and from the root of the tree we follow the branch, so here we go to the left.

Next, we take the second bit:

we continue to the left and find the symbol 'a'

so we know that these 2 bits of information represent the symbol 'a'

Then we start again from the top of the tree with the third bit 0

So we go again to the left but this time the 4th bit is 1 so we go right

we find the symbol 'b' and so we know these 2 bits represent the symbol 'b'

then the 5th bit is 1 so this time from the top of the tree we go to the right

then we have a 0 so we go to the left

then we have a 1, so we go to the right and find the symbol 'd'.

so we see that these 3 bits are the symbol 'd'

similarly we then have a 1 and again a 1 and find the symbol 'e'

and then we have 100; if you follow the tree, you can see it is the symbol 'c'.

So this sequence is abdec
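
To make the walkthrough concrete, here is a minimal Python sketch of this decoding; the dictionary is the one read off the tree above (left branch = 0, right branch = 1 is an assumed orientation):

```python
# Prefix-free dictionary read off the encoding tree in the walkthrough.
codes = {"a": "00", "b": "01", "c": "100", "d": "101", "e": "11"}

def decode(bits, codes):
    """Scan the bit string, emitting a symbol whenever the buffer matches
    a code; prefix-freeness makes the greedy match unambiguous."""
    inverse = {code: symbol for symbol, code in codes.items()}
    decoded, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in inverse:
            decoded.append(inverse[buffer])
            buffer = ""
    return "".join(decoded)

print(decode("000110111100", codes))  # -> abdec, as in the walkthrough
```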

Now imagine that in this text there are 200 a's, 300 b's, 40 c's, 60 d's, and 400 e's,

so we have a text of 1000 characters

Now, if I encode this text with the first dictionary, with a constant length of 3 bits per symbol,

it will give us a length of 1000 × 3 = 3000 bits.

Now if I try to encode the text with the second dictionary

this time we have 2 bits to encode a, b, and e, and 3 bits for c and d.

So the total length will be (200 + 300 + 400) × 2 + (40 + 60) × 3 = 2100 bits.

As you can see, only by changing the dictionary, with the most frequent symbols

encoded with a shorter sequence of bits, we managed to go from 3000 bits to 2100 bits for the same text

We have thus saved 900 bits, which is a 30% compression rate.
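
As a sanity check, here is the same arithmetic in a few lines of Python (the counts and code lengths are the ones given above):

```python
counts = {"a": 200, "b": 300, "c": 40, "d": 60, "e": 400}
fixed_len = {s: 3 for s in counts}                  # first dictionary
var_len = {"a": 2, "b": 2, "e": 2, "c": 3, "d": 3}  # second dictionary

bits_fixed = sum(counts[s] * fixed_len[s] for s in counts)  # 3000 bits
bits_var = sum(counts[s] * var_len[s] for s in counts)      # 2100 bits
print(bits_fixed, bits_var, 1 - bits_var / bits_fixed)      # 3000 2100 0.3
```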

Another way to calculate the compression is with the probability of each symbol's appearance.

so for example if you consider 'a' there are 200 'a' in the text out of 1000 characters

so the probability that a given symbol is an 'a' is 0.2

Similarly, for the other symbols we get these probabilities: b: 0.3, c: 0.04, d: 0.06, e: 0.4.

We can now calculate the mean length of a symbol in bits

To do this, we sum each symbol's probability multiplied by the length of its bit sequence.

So for the first dictionary it gives us a mean length of 3 bits per symbol

and for the second dictionary it gives us a mean of 2.1 bits per symbol
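
Written out, this mean is just the expected code length under the symbol probabilities; for the second dictionary:

$$\bar{\ell} = \sum_i p_i\,\ell_i = (0.2 + 0.3 + 0.4)\cdot 2 + (0.04 + 0.06)\cdot 3 = 1.8 + 0.3 = 2.1 \text{ bits/symbol}$$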

We might wonder whether, with a third dictionary, we could achieve an even better compression.

But we won't test all possible encodings to find the one that gives us an optimal compression

For that, we have a method called the Huffman algorithm, which allows us to find an optimal encoding, one that maximizes the compression.

We will see in another module exactly what an algorithm is, in detail,

but for now you only need to know that it is like a recipe, a method to solve a problem

In our case, we have a list of symbols with their probabilities, and we want a method to find

the optimal encoding maximizing the compression

So with the Huffman algorithm we will construct an optimal tree

I will write down the different symbols along with their probabilities:

a: 0.2, b: 0.3, c: 0.04, d: 0.06, e: 0.4

Then we will take the 2 symbols with the lowest probabilities

and put them together to build the tree from these 2 symbols

Here, with c and d, I can start building the tree from these two.

Here we indicate a bit 0 and a bit 1

and then I add these two probabilities, which gives us 0.1.

Then, from this probability and the 3 remaining ones, I start the same process over:

I again take the 2 lowest probabilities, in this case 'a' with 0.2, and 0.1,

and I link them.

I again write 0 and 1; I add them, and it gives me 0.3.

And we start over: we take the 0.3 together with the 'b' (also 0.3), because these are the 2 lowest probabilities.

I link them, and it gives me 0.6.

Then I only have the 'e' left. Of course, the sum of the last 2 probabilities should always be 1.
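
Here is a minimal Python sketch of this construction, using a heap to repeatedly pop the two lowest probabilities; the 0/1 orientation of each merge may differ from the drawing, but the code lengths come out the same:

```python
import heapq
from itertools import count

def huffman_codes(probs):
    """Build a Huffman code from {symbol: probability} by repeatedly
    merging the two subtrees with the lowest total probability."""
    ids = count()  # tie-breaker so the heap never has to compare dicts
    heap = [(p, next(ids), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, zero = heapq.heappop(heap)  # lowest: prefix its codes with 0
        p1, _, one = heapq.heappop(heap)   # second lowest: prefix with 1
        merged = {s: "0" + c for s, c in zero.items()}
        merged.update({s: "1" + c for s, c in one.items()})
        heapq.heappush(heap, (p0 + p1, next(ids), merged))
    return heap[0][2]

probs = {"a": 0.2, "b": 0.3, "c": 0.04, "d": 0.06, "e": 0.4}
codes = huffman_codes(probs)
print(codes)  # code lengths: e=1, b=2, a=3, c=4, d=4
print(sum(probs[s] * len(c) for s, c in codes.items()))  # 2.0 bits/symbol
```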

Reading the code lengths off this tree (e: 1 bit, b: 2 bits, a: 3 bits, c and d: 4 bits each) gives a mean of 2 bits per symbol, so with the Huffman algorithm we can compress even more.

We can prove, but we won't do it here, that the Huffman algorithm is always optimal

which means there is no better encoding for compression.

Now, if we have a text of 200 pages with 1000 characters per page, that is, 200,000 characters,

we will see how many bits it takes with our different encodings

If we take our first encoding, it gives us 200,000 × 3 = 600,000 bits.

With our second, modified encoding: 200,000 × 2.1 = 420,000 bits.

And if we take our third encoding, built with the Huffman algorithm: 200,000 × 2 = 400,000 bits.

Let's move on to lossy compression

There are several methods; I will present the main one here.

In essence, information like images, sound, or video can be seen as a signal made up of frequencies.

It is possible to filter these frequencies to keep only the most important ones,

meaning those that are the most perceptible for the human senses

This way we can compress information a lot

In particular, if you consider a JPEG image, we can reduce its size up to 20 times without the loss being perceptible to the human eye

The presentation I just gave is superficial, because to really understand this method,

you need some knowledge of signal processing, integral calculus, and Fourier transforms,

so I will not go into detail here; but for those interested, here is a link to a more advanced video.

I hope you understood this introduction to data compression and see you soon for a new video!

-------------------------------------------

The Ottawa Hospital Cancer Centre 5/ Le Centre de cancérologie de L'Hôpital d'Ottawa - Duration: 0:40.

-------------------------------------------

#BIM 3D architectural design software - #Edificius - 008 - Duration: 1:02.

-------------------------------------------

Laurence inc. | Episode 5, Season 1 - Monsieur Flanagan - Duration: 2:10.

-------------------------------------------

So, can you drive with a visual impairment? - Duration: 6:47.

-------------------------------------------

Supplement: Signal Processing - Duration: 3:00.

Hi everybody!

Welcome to this supplementary video about signal processing and lossy compression.

It complements my main video about compression, which you can watch here.

In this video, I will present the basic mathematics of signal processing

and how we can do lossy compression with it

I assume here that you already have some basic mathematical knowledge, in particular about Fourier transforms.

The idea of signal processing is that analog information can be seen as a function that we call a 'signal'.

In the case of sound, it is a function from R to R

Let me recall here the definition of the Fourier transform,

which shows we can express a function as a sum of sines and cosines

We use Euler's formula here to convert an exponential into a sine and cosine.
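
The definition itself appears only on screen; in one standard convention it reads, together with Euler's formula:

$$\hat{f}(\xi) = \int_{-\infty}^{+\infty} f(t)\,e^{-2\pi i \xi t}\,dt, \qquad e^{i\theta} = \cos\theta + i\sin\theta$$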

We will then represent a signal as a discrete sum of sines

Here, for each sine, a_i denotes the amplitude, f_i the frequency, and d_i the phase.
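
In formula form, this discrete sum of sines reads (with the notation just introduced):

$$s(t) = \sum_i a_i \sin(2\pi f_i t + d_i)$$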

An important notion we must define for a signal is what we call the bandwidth

which is simply the maximum frequency present in the signal.

When we have an analog signal that we want to encode,

we choose different points of the function and encode each of them in the computer.

This process is called 'sampling'

So now we don't have a signal defined at every point t anymore

But a signal only defined at points nTe

Te is the sampling period

and the inverse of the sampling frequency
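
In symbols, writing f_e for the sampling frequency (the symbol f_e is assumed here, not taken from the video):

$$x[n] = x(nT_e), \qquad T_e = \frac{1}{f_e}$$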

So you can imagine that the more points we have the better we will be able to reconstruct the original signal

Which means the sampling frequency must be high

How high must it be?

Well, we have the sampling theorem, which tells us that the sampling frequency must be greater than twice the bandwidth

so that we can reconstruct the original analog signal.

So if this condition holds, it is possible to reconstruct perfectly the signal

with the interpolation formula
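
The formula shown on screen is presumably the standard Whittaker–Shannon interpolation formula: if f_e > 2B, where B is the bandwidth, then

$$x(t) = \sum_{n \in \mathbb{Z}} x(nT_e)\,\operatorname{sinc}\!\left(\frac{t - nT_e}{T_e}\right), \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}$$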

As you can see, it is only a theoretical formula, since we sum over all integers in Z.

So this reconstructed signal is characterized by frequencies

and we will be able to apply filters on these frequencies

One of the simplest filters is the low-pass filter,

which is defined with a cutoff frequency fc

we will simply remove all frequencies greater than the cutoff frequency
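
A minimal Python sketch of such a filter, using NumPy's FFT to zero out every component above the cutoff (the tone frequencies and sample rate below are made-up illustration values):

```python
import numpy as np

def low_pass(signal, sample_rate, fc):
    """Zero every Fourier coefficient above the cutoff frequency fc,
    then transform back: a crude but direct low-pass filter."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > fc] = 0
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a 5 Hz tone plus a 50 Hz component, sampled at 1 kHz.
t = np.arange(0, 1, 1 / 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)
y = low_pass(x, sample_rate=1000, fc=10)  # keeps only the 5 Hz tone
```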

We then have what we call the moving-average filter

which consists of integrating the signal over a predefined period.

So actually it is a mean over a predefined interval
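
And a sketch of the moving-average filter in the same style; since averaging attenuates fast variations, it behaves like a rough low-pass filter:

```python
import numpy as np

def moving_average(signal, window):
    """Convolve with a uniform kernel: each output sample is the mean of
    `window` neighbouring input samples, which smooths fast variations."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")
```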

So with these filters, we can discard unnecessary information

and keep only the most important data

and compress the information

I hope you understood the basics of signal processing and lossy compression

I also hope you enjoyed this more advanced and technical format

Feel free to comment if you want something even more advanced or, on the contrary, something more accessible.

And see you soon for a new video!
