
07/21 BVSC

LuisGarcia 2020.07.21 23:18:09



BVSC note: Buena Vista Social Club are Cuban musicians.


Machine Learning -> AI systems' ability to acquire their own knowledge

that ability -> extracting patterns from raw data


The relationship between logistic regression and the softmax function

https://ratsgo.github.io/machine%20learning/2017/04/02/logistic/

Since the model is set up to predict the probability of each class, I think it naturally takes the softmax form.
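A quick numpy check of this (my own sketch, not from the book or the linked post): with two classes, softmax over the logits reduces exactly to the logistic sigmoid of their difference, so logistic regression is the two-class special case of softmax.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# A two-class softmax depends only on the difference of the logits,
# which is exactly the logistic sigmoid.
z = np.array([2.0, 0.5])
p_softmax = softmax(z)[0]
p_sigmoid = sigmoid(z[0] - z[1])
print(p_softmax, p_sigmoid)   # both ~0.8176
```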


A field that grew out of the importance of how data is represented -> representation learning

A representative example is the autoencoder: an encoder (converts the input to a new representation) plus a decoder (converts it back to the original representation)

The aim is to create many new features without losing the features of the original data
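A minimal linear-autoencoder sketch in numpy (my own illustration, not the book's): the weights come from an SVD rather than being learned by gradient descent, which for the linear case gives the best least-squares autoencoder and is equivalent to PCA.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # 100 samples, 5 features
mu = X.mean(axis=0)

# Linear encoder/decoder sharing one weight matrix W (5 -> 2 -> 5).
# W = top-2 right singular vectors of the centered data.
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:2]                              # shape (2, 5)

def encode(x):
    return (x - mu) @ W.T               # new 2-d representation

def decode(h):
    return h @ W + mu                   # back to the original space

X_hat = decode(encode(X))
print(np.mean((X - X_hat) ** 2))        # reconstruction error
```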


factors of variation -> a factor = a separate source of influence


The multilayer perceptron resolves the limitations of linear models like linear regression (e.g., the XOR problem)
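A small numpy demonstration (my own sketch): the best linear least-squares fit to XOR predicts 0.5 everywhere, while a one-hidden-layer MLP with ReLU units, using the hand-picked weights from ch. 6 of the book, computes XOR exactly.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)       # XOR targets

# Best linear fit (with bias): predicts 0.5 on every input -> useless.
A = np.hstack([X, np.ones((4, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(A @ coef)                               # ~[0.5, 0.5, 0.5, 0.5]

# One hidden layer of ReLU units solves XOR exactly
# (the hand-picked weights from Goodfellow et al., ch. 6).
relu = lambda z: np.maximum(0, z)
W = np.array([[1., 1.], [1., 1.]])
c = np.array([0., -1.])
w = np.array([1., -2.])
print(relu(X @ W + c) @ w)                    # [0, 1, 1, 0]
```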


Historical Trend


Most neural networks today are based on a model neuron called the rectified linear unit (ReLU).


Things to look up: 1. kernel machines 2. Bayesian statistics


It is worth noting that the effort to understand how the brain works on an algorithmic level is alive and well. This endeavor is primarily known as "computational neuroscience" and is a separate field of study from deep learning.


It is common for researchers to move back and forth between both fields. The field of deep learning is primarily concerned with how to build computer systems that are able to successfully solve tasks requiring intelligence, while the field of computational neuroscience is primarily concerned with building more accurate models of how the brain actually works.


SVM reference: https://bskyvision.com/163


Ch.2 Linear Algebra


The span of a set of vectors is the set of all points obtainable by linear combination of the original vectors.
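A quick numeric way to test membership in a span (my own sketch): solve the least-squares problem with the vectors as columns and check whether the residual vanishes.

```python
import numpy as np

# Columns of A are the original vectors; b lies in their span
# iff the least-squares solution of Ax = b reproduces b exactly.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])                 # two vectors in R^3
b_in  = np.array([2., 3., 5.])           # = 2*v1 + 3*v2
b_out = np.array([1., 1., 0.])           # not a combination of v1, v2

for b in (b_in, b_out):
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(A @ x, b))         # True, then False
```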


Sometimes we need to measure the size of a vector. In machine learning, we usually measure the size of vectors using a function called a norm.


[image: the norm definition from the book, presumably the general L^p norm ||x||_p = (Σ_i |x_i|^p)^(1/p)]


The squared L2 norm is more convenient to work with mathematically and computationally than the L2 norm itself. For example, the derivatives of the squared L2 norm with respect to each element of x each depend only on the corresponding element of x, while all of the derivatives of the L2 norm depend on the entire vector. In many contexts, the squared L2 norm may be undesirable because it increases very slowly near the origin. In several machine learning applications, it is important to discriminate between elements that are exactly zero and elements that are small but nonzero.


In these cases, we turn to a function that grows at the same rate in all locations, but retains mathematical simplicity: the L1 norm.


The L1 norm is commonly used in machine learning when the difference between zero and nonzero elements is very important.
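Putting the norm facts above into numpy (my own sketch): the gradient of the squared L2 norm is just 2x, so each entry depends on one element; the gradient of the L2 norm involves the whole vector; and near the origin the squared L2 norm grows far more slowly than the L1 norm.

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

l2    = np.linalg.norm(x)      # sqrt(sum x_i^2) = 5
sq_l2 = x @ x                  # 25
l1    = np.linalg.norm(x, 1)   # sum |x_i| = 7
print(l2, sq_l2, l1)

grad_sq = 2 * x                # each entry depends only on its own x_i
grad_l2 = x / l2               # every entry depends on the whole vector
print(grad_sq, grad_l2)

# Near the origin: squared L2 of [eps, 0, 0] is eps^2 (tiny),
# while its L1 norm is eps (grows at the same rate everywhere).
eps = 1e-6
print(eps ** 2, eps)
```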


We write diag(v) to denote a square diagonal matrix whose diagonal entries are given by the entries of the vector v. Diagonal matrices are of interest in part because multiplying by a diagonal matrix is very computationally efficient.
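A sketch of why that multiplication is cheap (my own example): diag(v) x is just the elementwise product of v and x, so no O(n^2) dense matrix product is needed.

```python
import numpy as np

v = np.array([2.0, 3.0, 5.0])
x = np.array([1.0, 1.0, 1.0])

full  = np.diag(v) @ x    # O(n^2) dense matrix-vector product
cheap = v * x             # O(n) elementwise product, same result
print(np.allclose(full, cheap))   # True
```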


Symmetric matrices often arise when the entries are generated by some function of two arguments that does not depend on the order of the arguments. For example, if A is a matrix of distance measurements, with A_i,j giving the distance from point i to point j, then A_i,j = A_j,i because distance functions are symmetric.
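A quick check with a distance matrix (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(4, 2))                    # 4 points in the plane

# A[i, j] = Euclidean distance from point i to point j.
A = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
print(np.allclose(A, A.T))                     # True: distance is symmetric
```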


A vector x and a vector y are orthogonal to each other if xᵀy = 0. An orthogonal matrix is a square matrix whose rows are mutually orthonormal and whose columns are mutually orthonormal: AᵀA = AAᵀ = I, which implies A⁻¹ = Aᵀ.
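A 2-d rotation matrix is a handy concrete example of an orthogonal matrix (my own check):

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

print(np.allclose(Q.T @ Q, np.eye(2)))            # columns orthonormal
print(np.allclose(np.linalg.inv(Q), Q.T))         # inverse = transpose
```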


Eigendecomposition, SVD, PCA -> MIT OpenCourseWare Linear Algebra

Av = λv

A = V diag(λ) V⁻¹


Specifically, every real symmetric matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues: A = QΛQᵀ, where Q is an orthogonal matrix composed of eigenvectors of A, and Λ is a diagonal matrix.
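Verifying the symmetric eigendecomposition with numpy (my own sketch; eigh is numpy's routine for symmetric/Hermitian matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
A = (B + B.T) / 2                        # build a real symmetric matrix

lam, Q = np.linalg.eigh(A)               # real eigenvalues, orthogonal Q
print(np.allclose(A, Q @ np.diag(lam) @ Q.T))   # A = Q Λ Q^T
print(np.allclose(Q.T @ Q, np.eye(3)))          # Q is orthogonal
```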


The Moore-Penrose Pseudoinverse

When A has more columns than rows, using A⁺ = VD⁺Uᵀ yields the solution x = A⁺y with minimal Euclidean norm ||x||₂ among all possible solutions.

When A has more rows than columns, it is possible for there to be no solution. In this case, the pseudoinverse gives us the x for which Ax is as close as possible to y in terms of the Euclidean norm ||Ax − y||₂.
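Checking both cases with numpy (my own sketch; np.linalg.pinv computes A⁺ via the SVD):

```python
import numpy as np

rng = np.random.default_rng(0)

# Wide A (more columns than rows): infinitely many solutions;
# the pseudoinverse picks the one with minimal ||x||_2.
A = rng.normal(size=(2, 4))
y = rng.normal(size=2)
x = np.linalg.pinv(A) @ y
print(np.allclose(A @ x, y))             # solves Ax = y exactly

# Tall A (more rows than columns): possibly no exact solution;
# the pseudoinverse gives the least-squares x minimizing ||Ax - y||_2.
A = rng.normal(size=(4, 2))
y = rng.normal(size=4)
x = np.linalg.pinv(A) @ y
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x, x_ls))              # matches the lstsq solution
```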


The Determinant

The determinant is equal to the product of all the eigenvalues of the matrix. The absolute value of the determinant can be thought of as a measure of how much multiplication by the matrix expands or contracts space. If the determinant is 0, then space is contracted completely along at least one dimension, causing it to lose all of its volume. If the determinant is 1, then the transformation preserves volume.
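A small numeric check (my own sketch): the determinant equals the product of the eigenvalues, det = 1 means volume is preserved, and a singular matrix has det = 0.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])               # stretches x by 2, shrinks y by 2

eigvals = np.linalg.eigvals(A)
print(np.prod(eigvals), np.linalg.det(A))   # both 1.0: volume preserved

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])               # rank 1: collapses a direction
print(np.linalg.det(S))                  # ~0.0
```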


Go over the PCA part again after finishing the Linear Algebra lectures.


Ch.3 Probability and Information Theory


We use probability theory in two major ways. First, the laws of probability tell us how AI systems should reason, so we design our algorithms to compute or approximate various expressions derived using probability theory. Second, we can use probability and statistics to theoretically analyze the behavior of proposed AI systems.


While probability theory allows us to make uncertain statements and reason in the presence of uncertainty, information theory allows us to quantify the amount of uncertainty in a probability distribution.
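A tiny example of quantifying that uncertainty (my own sketch): the Shannon entropy of a discrete distribution, in bits.

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits; 0 * log(0) is taken as 0.
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

print(entropy([0.5, 0.5]))   # 1.0 bit: maximal uncertainty for 2 outcomes
print(entropy([1.0, 0.0]))   # 0.0 bits: no uncertainty at all
print(entropy([0.9, 0.1]))   # ~0.469 bits: mostly predictable
```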


