Scrape: 1. movie or TV series title (must distinguish movie vs. TV series); 2. release date (year is enough); 3. playback link
https://www.zxzj.fun/list/1.html
pages run from 1.html to 6.html
query:
https://www.zxzj.fun/vodshow/1-----------2022.html
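a minimal scraping sketch for the pages above, assuming Python with requests and BeautifulSoup; the CSS selector and field layout are guesses, since the real markup of zxzj.fun is not recorded here:

import requests
from bs4 import BeautifulSoup

BASE = "https://www.zxzj.fun"

for page in range(1, 7):  # list pages 1.html through 6.html
    html = requests.get(f"{BASE}/list/{page}.html", timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for item in soup.select(".stui-vodlist li a"):  # hypothetical selector
        title = item.get("title", "")
        link = BASE + item.get("href", "")
        print(title, link)

the year filter can then reuse the vodshow query pattern above (e.g. the -2022.html suffix).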
Consider recording the media before real-time processing.
Scan the object via taobao streaming and make it dance.
Transplant lolita pictures to bilibili.
Share dialogs/info from soul/qq/wechat.
Repurpose a wide range of streaming platforms.
use OCR to filter out text info
find new titles from danmaku or comments
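a small OCR sketch for the text-filtering note above, assuming pytesseract and Pillow are available; the frame file and the filtering rule are placeholders:

import pytesseract
from PIL import Image

frame = Image.open("frame.png")  # a frame exported from the video beforehand
text = pytesseract.image_to_string(frame, lang="chi_sim+eng")
# placeholder filter: keep lines long enough to look like subtitles/titles
candidates = [ln.strip() for ln in text.splitlines() if len(ln.strip()) > 3]
print(candidates)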
We all know that a video consists mainly of picture and audio. But there is another element that carries just as much information: subtitles. Combined with existing natural language processing models, we can implement dialogue-extraction-based automatic editing. PS: the software icon was designed by my little sister Hannah~
项目开源地址:https://github.com/COLOR-SKY/DialogueExtractor
I will keep publishing tool tutorials in my spare time from my studies.
we first see the world, get the observation, and respond in the form of content. it is a feedback loop.
to search for components inside videos, first take screenshots, then do an image search, then use the resulting keywords to find the source video.
breakdown approach:
granularize every step, showing all possibilities for getting content created, then optimize each step against standards.
filter approach:
establish some topics, create topic-specific approaches to arrange the content, then choose the best among all topics.
are they compatible? are you sure the design is modular, scalable and extensible?
novices have a few unpolished ideas and wait to realize them in code. but that lacks the feedback loop, so you cannot adjust yourself according to the reaction. the breakdown approach must be used to automate the optimization, while the topic-based approach is simpler at first hand.
to avoid copyright issues, run a Google search first.
the topic-based approach assumes the public always has something in common, so you only search for specific things at first hand. such pipelines are easy to control, static and consistent. the breakdown approach is where the evolution begins.
let's assume our topic is pets on weibo. pets come in different kinds and the content creators differ from each other. all we do is download and upload. we get comments from our viewers, video play counts and various other feedback. we improve the sourcing based on that feedback, searching for more untouched content and more mixes like video/audio crossovers.
the breakdown approach is demonstrated first-hand with our actor-critic model. we first view all possible posts from all sources, find what's interesting and repost it to our target platform. this is likely to be cheating. we then re-choose our sources and our method of modification based on feedback. topics are generated from the very first step.
the interest model, which generates the topics, is the key to the breakdown approach. we eventually have to construct a breakdown approach that boosts our searches in every aspect; feedback is one of its key features. we eventually have to view the content with the machine. suggestion: use the breakdown approach now.
anatomy of the post:
first, it must be postable, according to our mandatory ordering: it should not be taken down or banned for a long time. ban detection is required and usually simple to test against.
second, it should be the most profitable. we prefer only those tasks which give the most output. occasionally we choose something fresh despite lower expectations.
third, it should be resourceful. consistently holding an audience across a series of videos is undoubtedly a mark of competence. this can be reached by a creativity engine built on comments and imagination: realize the unrealized.
nothing systematic has been found yet on the full details of such an automated content creation system; we only pick up the pieces. it is important to keep the entire design flexible and create miniature tests while assembling the system. like any famous writer/director, you can name the craft but not reproduce it.
hands on the approach: no matter whether it is inspired by anyone or anything, it is time to begin, to complete the feedback loop.
not a pipe, but a loop.
we demonstrate the loop using fake data, then the real ones. maybe the initial topic is also meant to be fake data; real-world data is too stochastic for us to imagine. better to construct something specific, as in the sketch below.
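a minimal sketch of the loop on fake data, in Python; every component here (the candidate posts, the scoring rule, the feedback numbers) is an invented placeholder, and only the loop shape matters:

import random

def fetch_candidates():
    # fake source: in reality, scraped posts from the chosen platforms
    return [f"post-{i}" for i in range(10)]

def publish(post):
    # fake feedback: in reality, play counts, comments, likes
    return {"plays": random.randint(0, 1000), "likes": random.randint(0, 100)}

def score(feedback):
    return feedback["plays"] + 10 * feedback["likes"]

weights = {}  # per-post preference, updated by feedback
for _round in range(5):
    candidates = fetch_candidates()
    # act: pick the candidate the current weights like best (plus exploration noise)
    pick = max(candidates, key=lambda p: weights.get(p, 0) + random.random())
    feedback = publish(pick)
    # criticize: good feedback makes similar picks likelier next round
    weights[pick] = 0.9 * weights.get(pick, 0) + 0.1 * score(feedback)
print(weights)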
If the item is present in the current categories, choose its number.
otherwise, provide a name; a number will be assigned automatically.
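a tiny sketch of that auto-assignment rule (the names and storage are illustrative only):

categories = {1: "pets", 2: "movies"}

def resolve_category(name_or_number):
    if isinstance(name_or_number, int) and name_or_number in categories:
        return name_or_number  # existing category chosen by number
    new_id = max(categories, default=0) + 1  # assign the next free number
    categories[new_id] = str(name_or_number)
    return new_id

print(resolve_category(1))        # -> 1 (existing)
print(resolve_category("music"))  # -> 3 (newly assigned)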
regression testing (backtesting) as done in the stock market, which can also apply to bilibili
interatomic potentials:
https://www.ctcms.nist.gov/potentials/
https://zhuanlan.zhihu.com/p/351829537
https://www.ctcms.nist.gov/potentials/system/C-H-O/
https://atb.uq.edu.au/index.py?tab=structure_search
https://github.com/dwsideriusNIST/LAMMPS_Examples
OpenSMOG / SMOG 2: provides a force-field generation tool
run a simulation at a given temperature and pressure (NPT) and measure the density
openmm
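a hedged OpenMM sketch of that NPT-density workflow; the input file and force-field choices below are placeholders, not from these notes:

from openmm import LangevinMiddleIntegrator, MonteCarloBarostat, unit
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds

pdb = PDBFile("box.pdb")  # placeholder: a pre-built periodic box
ff = ForceField("amber14-all.xml", "amber14/tip3p.xml")  # placeholder force field
system = ff.createSystem(pdb.topology, nonbondedMethod=PME, constraints=HBonds)
system.addForce(MonteCarloBarostat(1 * unit.bar, 300 * unit.kelvin))  # NPT
integrator = LangevinMiddleIntegrator(300 * unit.kelvin, 1 / unit.picosecond,
                                      0.002 * unit.picoseconds)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.step(50_000)  # equilibrate under NPT

state = sim.context.getState()
mass_amu = sum(system.getParticleMass(i).value_in_unit(unit.dalton)
               for i in range(system.getNumParticles()))
vol_nm3 = state.getPeriodicBoxVolume().value_in_unit(unit.nanometer ** 3)
# 1 amu = 1.66054e-24 g; 1 nm^3 = 1e-21 cm^3
print("density ≈", mass_amu * 1.66054e-24 / (vol_nm3 * 1e-21), "g/cm^3")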
generate force field on the fly:
from openff.toolkit.topology import Molecule
molecule = Molecule.from_smiles('c1ccccc1')  # benzene from SMILES
pymatgen contains polymer generator to lammps:
pymatgen.io.lammps.utils
simulating reactions in molecular dynamics:
implemented in LAMMPS as the fix bond/react command
random.randomvoidmail@erine.email (pending approval)
polymer names seen on the LAMMPS demo website:
https://lammps.org/pictures.html#reactphoto
https://docs.lammps.org/Intro_features.html
If you are a new computational chemist I would advise you to use ASE; it is not only useful for nanoparticles, and I'm using it nearly every day.
patent:
http://chemdataextractor.org/results/26088052-3833-41ea-98f1-0a8a3fb2c341
https://www.zhihu.com/question/50559712
moltemplate, packmol
vmd: lammps data file visualization
build input file for lammps:
get retrosynthesis training data from image search engines
octa: predict polymer properties
https://octa.jp/references/examples/
You can try https://spaya.ai/, a retro-synthetic analysis tool.
http://www.orgsyn.org/
http://www.organic-chemistry.org/
Try this interesting blog: http://totallysynthetic.com/blog/
And also this website: http://chemistrybydesign.oia.arizona.edu/
http://www.chemspider.com/
https://pubchem.ncbi.nlm.nih.gov/search
Organic Syntheses Website: http://www.orgsyn.org
Organic Chemistry Portal: http://www.organic-chemistry.org/abstracts
Chemsynthesis: http://www.chemsynthesis.com
ChemExper: https://www.chemexper.com
PubChem compound: http://pubchem.ncbi.nlm.nih.gov
E-molecules: http://www.emolecules.com
Chemspider: http://www.chemspider.com
Reaxys: https://www.reaxys.com/
SciFinder: http://www.cas.org
STN: https://stnweb.cas.org/
https://www.vulcanchem.com/
gromacs: creating polymer structure
http://www.gromacs.org/Documentation_of_outdated_versions/How-tos/Polymers
latest gromacs documentation:
https://manual.gromacs.org/documentation/
online organic chemistry textbook:
https://www2.chemistry.msu.edu/faculty/reusch/virttxtjml/intro1.htm
openbabel only runs normally on x86 platforms; so do other cheminformatics packages.
sources of organic synthesis
https://www.organic-chemistry.org
What is matsci.org?
matsci.org is a community forum for the discussion of anything materials science, with a focus on computational materials science research. Its members are typically from academic research institutions and universities.
People that currently help run matsci.org include maintainers of the following codes and collaborations:
OVITO
GULP
DL_POLY
OPTIMADE
pyiron
hiphive
ASE
MPDS
iFermi
LAMMPS
MaRDA
exciting
JARVIS
and members of the following research groups:
Hacking Materials Group
Persson Group
Materials Virtual Lab
Materials Intelligence
translate bigsmiles into smiles
polymer database:
PolyInfo and NIST Synthetic Polymer MALDI Recipes database
USPEX
chemdraw chemoffice indraw spaya.ai
reaxys scifinder-n
marvin sketch pka
https://github.com/PKUMDL-AI/AutoSynRoute
polymer simulation:
material studio
amsterdam modeling suite
cp2k orca
https://orcaforum.kofo.mpg.de/app.php/portal
chemistry in stack exchange:
https://chemistry.stackexchange.com/
polymer retrosynthesis using retro*:
seq2seq-retro mlp-retro polyretro-uspto
deepchem, chempy(inorganic)
avogadro: import openbabel files
odachi: decomposes a target molecule into source molecules, highlighting the potential bonds
rdkit: Python cheminformatics
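a minimal rdkit sketch (the molecule and descriptor are arbitrary examples):

from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(Descriptors.MolWt(mol))  # molecular weight
print(Chem.MolToSmiles(mol))   # canonical SMILES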
polymer informatics
ab initio chemistry:
lammps, quantum espresso, nwchem, gamess, uspex
from https://www.webmol.net:
Gamess, Gaussian, MolPro, Mopac, NWChem, Orca, PQS, PSI, Q-Chem, TeraChem, Tinker, Quantum Espresso, and VASP
Description
Mingming wants to invite some classmates at school to do a survey. For the sake of objectivity, he first uses a computer to generate N random integers between 1 and 1000 (N ≤ 1000). Among duplicated numbers only one copy is kept and the rest are removed; different numbers correspond to different students' ID numbers. He then sorts the numbers in ascending order and visits the classmates in that order. Please help Mingming with the "deduplication" and "sorting" work. (A single test case may contain multiple groups of data, used for different surveys; please handle them correctly.)
Note: the test cases guarantee the input parameters are valid, so no validation is needed. There is more than one group of test data.
Input ends when there is no more data to read.
Constraints: 1 ≤ N ≤ 1000, and each input value satisfies 1 ≤ val ≤ 500.
Input format:
Note: the input may contain multiple groups of data (for different surveys). Each group spans multiple lines: the first line gives the count N of random integers, and the following N lines each contain one integer. See the "Example" below for the exact format.
Output format:
multiple lines: the processed results
Example 1
Input: 3
2
2
1
11
10
20
40
32
67
40
20
89
300
400
15
Output: 1
2
10
15
20
32
40
67
89
300
400
Explanation: Example 1 contains two sub-cases!
Input explanation:
The first number is 3, i.e. N=3 for this sub-case: the computer generated 3 random integers between 1 and 1000, one per line on the following 3 lines, namely:
2
2
1
So the output of the first sub-case is:
1
2
The first number of the second sub-case is 11, i.e. … (analogous to the explanation above) …
So the output of the second sub-case is:
10
15
20
32
40
67
89
300
400
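A minimal Python solution sketch for this problem (reads until EOF and handles multiple groups):

import sys

def main():
    data = sys.stdin.read().split()
    i = 0
    while i < len(data):
        n = int(data[i]); i += 1
        group = data[i:i + n]; i += n
        # deduplicate, sort ascending, print one number per line
        for v in sorted(set(map(int, group))):
            print(v)

main()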
https://github.com/davidanastasiu/coen-342-wi22
PR1: Peptide Classification
Published Date:
Jan. 12, 2020, 5:00 p.m.
Deadline Date:
Jan. 25, 2020, 11:59 p.m.
Description:
This is an individual assignment.
Overview and Assignment Goals:
The objectives of this assignment are the following:
Create feed-forward neural networks and train them using your own codes and
frameworks.
Experiment with different feature extraction techniques.
Think about dealing with imbalanced data.
Detailed Description:
Develop predictive neural networks that can determine, given an antibacterial peptide,
whether it is also an antibiofilm peptide.
“Proteins are large biomolecules, or macromolecules, consisting of one or more long
chains of amino acid residues. Proteins perform a vast array of functions within organisms,
including catalysing metabolic reactions, DNA replication, responding to stimuli, providing
structure to cells, and organisms, and transporting molecules from one location to another.
Proteins differ from one another primarily in their sequence of amino acids, which is
dictated by the nucleotide sequence of their genes, and which usually results in protein
folding into a specific three-dimensional structure that determines its activity.
A linear chain of amino acid residues is called a polypeptide. A protein contains at least
one long polypeptide. Short polypeptides, containing less than 20-30 residues, are rarely
considered to be proteins and are commonly called peptides. […] The sequence of amino
acid residues in a protein is defined by the sequence of a gene, which is encoded in the
genetic code. In general, the genetic code specifies 20 standard amino acids; […] Proteins
can also work together to achieve a particular function, and they often associate to form
stable protein complexes.” [Wikipedia, Accessed 2020-02-07,
https://en.wikipedia.org/wiki/Protein]
Biofilms are tightly-connected multicellular communities of microorganisms encased in self-
secreted extra-cellular matrices. They are currently one of the major causes of disease for
two main reasons. First, roughly 75% of all human infections are caused by biofilms.
Second, due to the robust multicellular matrix structure, they are resistant both to
the host defense mechanisms and to traditional antimicrobial compounds (antibiotics).
Thus, it is important to identify peptide sequences that are not only antimicrobial (can
destroy or render inert the invading microorganism), but also antibiofilm (can penetrate the
extra-cellular matrix so it can get to the microorganism in the first place).
You have been provided with a training set (train.dat) and a test set (test.dat) consisting of
peptide sequences, one per line in the file. Peptides are encoded as strings with characters
from an alphabet of 20 characters, each representing an amino-acid residue. The training
set also includes the label for each sequence as 1 (antibiofilm) or -1 (not antibiofilm) as the
first character in each line of the training file, separated from the sequence by a tab (\t)
character.
The input to your classifiers will not be the peptides themselves, but rather features
extracted from the peptides. Two simple approaches for feature extraction are the bag-of-
words and the k-mer models you should have learned about in Data Mining or Machine
Learning, where a word is one of the amino-acids in the peptide. You should not use any
additional external data in this assignment.
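As a concrete illustration of the k-mer model mentioned above, here is a hedged Python sketch (K and the example peptides are arbitrary choices, not from the assignment):

from itertools import product
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
K = 2  # k-mer length; treat as a hyperparameter

KMER_INDEX = {"".join(p): i for i, p in enumerate(product(ALPHABET, repeat=K))}

def kmer_features(peptide: str) -> np.ndarray:
    """Count occurrences of each k-mer in the peptide (bag-of-k-mers)."""
    vec = np.zeros(len(KMER_INDEX))
    for i in range(len(peptide) - K + 1):
        kmer = peptide[i:i + K]
        if kmer in KMER_INDEX:  # skip any non-standard residue
            vec[KMER_INDEX[kmer]] += 1
    return vec

X = np.array([kmer_features(p) for p in ["GLFDIVKKVV", "ACDEF"]])
print(X.shape)  # (2, 400) for K=2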
Note that the dataset is imbalanced. We will use Matthews's correlation coefficient (MCC) as the evaluation metric for this assignment, which, similar to the F-1 score, combines aspects of the result's sensitivity and specificity. Given the normal confusion matrix resulting from comparing the predicted and true classes of the test samples, MCC is defined as
MCC = (TP * TN - FP * FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))
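A quick numerical check of that formula (the confusion-matrix counts are made-up numbers; sklearn.metrics.matthews_corrcoef computes the same quantity from label vectors):

import numpy as np

tp, tn, fp, fn = 50, 300, 20, 22  # made-up counts
mcc = (tp * tn - fp * fn) / np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
print(mcc)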
Programs:
You are required to write two separate programs for the classification. The first may only
use basic Python structures (from numpy or scipy) and you should implement your own
functions for training the neural network. This is also the program you will use to make CLP
submissions. In addition, you should write a second program that uses a deep learning
framework of your choice to train the neural network. The structure of the network may be
the same or different from the one you created in the first program. You will present results
from this program (which should be at least as good as those from the first program) in
your report.
Considerations:
Try extracting different features from the peptide strings.
Consider oversampling the negative class to fix the apparent imbalance.
Try out different network configurations and activation functions.
Consider regularization as a way to keep weights balanced in the network.
Data Description:
The training dataset consists of 1566 records and the test dataset consists of 392 records.
We provide you with the training class labels and the test labels are held out. Your task is
to predict those labels for the peptides in the test set and create a test.txt file containing
those labels, which you will submit to CLP. Note that CLP only accepts files with extensions
.txt or .dat for your predicted labels, and .py or .ipynb or .zip or .tgz for codes.
Rules:
This is an individual assignment. Discussion of broad-level strategies is allowed, but
any copying of prediction files or source code will result in an honor code violation.
You are allowed 5 submissions per day.
After the submission deadline, only your chosen or last submission is considered for
the leaderboard.
Deliverables:
Valid submissions to the Leader Board website: https://clp.engr.scu.edu (username is
your SCU username and your password is your SCU password).
Canvas Submission for the report:
Include a 2-page, single-spaced report describing details regarding the steps you
followed for feature extraction, designing your neural network, and training your model.
The report should be in PDF format and the file should be called report.pdf. The report
needs to be structured as a technical report (title, abstract, introduction, sections,
conclusion), be free from grammatical errors, and use standard page and font sizes (letter
size page, 10 or 11 pt font). Be sure to include the following in the report:
Name and SCU ID.
Rank & MCC-score for your submission (at the time of writing the report). If
you chose not to see the leaderboard, state so.
Your approach.
Your methodology of choosing the approach and associated parameters.
Ensure you submitted the correct code on CLP that matches your output.
Zip up your report and codes for both programs in an archive called
Grading:
Grading for the Assignment will be split on your implementation (70%) and report (30%).
Extra credit (1% of final grade) will be awarded to the top-3 performing algorithms. Note
that extra credit throughout the semester will be tallied outside of Canvas and will be added
to the final grade at the end of the semester.
Files: available on Canvas.
If you have been lying in bed too long, consider stretching your legs and arms (tendon stretches).
If you eat too much, try cinching your belly: pull your trousers up higher so that you can only eat buns, pancakes and the like, not big meals, even after going hungry all evening.
According to Zhihu, lighting should be adjusted to your circadian rhythm, but in a room you can only simulate daylight.
Go to bed at the scheduled time; make sure the room is dark and quiet.
Near wake-up time, gradually raise the light level and the sound volume to assist waking.
This falls under smart home territory; smart home code:
https://github.com/home-assistant/core
Auto-adjust bulb brightness: a smart bulb needs a hand-soldered serial-to-USB board, or can be driven with an Arduino / Raspberry Pi.
https://duino4projects.com/project-auto-intensity-control-of-street-light-using-arduino/amp/
https://forum.arduino.cc/t/how-to-control-a-lamps-intensity/57081
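A rough Python sketch of the gradual wake-up light ramp over a serial link to such a board; the port name, baud rate, and the one-byte brightness protocol are assumptions about the hand-wired setup:

import time
import serial  # pyserial

port = serial.Serial("/dev/ttyUSB0", 9600)  # placeholder port and baud rate
for level in range(0, 256, 8):    # 32 steps from dark to full brightness
    port.write(bytes([level]))    # assumed protocol: one raw brightness byte
    time.sleep(56)                # 32 steps * 56 s ≈ 30 minutes total
port.close()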
Simply by working in bed, whether lying down or not, using a pillow or not, sore backside or not, I somehow lose my sleep in bed.
I don't know how to explain it, but it is simply true. I admit many people have lost their sleep in bed too, but they were never comfortable doing so; I am somehow better off than they are.
I cannot stay here forever coding. The issue is that we are too absorbed in seeking challenges, forgetting the way back, forgetting how we got this far.
Cyberspace is exciting, but humans do not evolve inside chips. To get things back on track, you need to forget about everything artificial.
No matter how smart you are, creating a place where you never need to move at all is dangerous and impossible. You must know when to leave and what to do without seeing code at all.
Stretching in bed is possible via the handle on the desk. Such an elegant design, isn't it? I've had it for a long time.
• You must demonstrate multiple examples of inheritance.
You may need to include classes that were not mentioned to demonstrate appropriate inheritance relationships.
• You must demonstrate multiple examples of aggregation/composition/association. You may need to create lists of animals or tiles in the EcoSim class.
• You must create methods not mentioned in the text above.
• You must include one more type of animal, and one more plant in your program.
• You must demonstrate polymorphism by calling overridden methods, or by checking the types of objects.
• You must demonstrate good encapsulation by making dangerous variables private or protected and providing getter or setter methods to them.
• You must include __str__ and __repr__ methods for each class. You may print(self) or print(animals) to help you with debugging.
• You must update your UML diagram written in part 1 and submit it with your code.
• Your code must be documented appropriately using docstrings.
• You must make at least 10 commits using git with appropriate comments.
You must use git version control to keep track of your progress during implementation. Only a local git repository is required in this assignment. You are not required to use an online repository, but if you choose to, please make sure your online repository is private. You should perform regular commits as you implement features and fix errors. Your commit comments should be short and reflect the changes that were made.
• You must write a unit test for the game.Vector2D class.
You will find Vector2D in the game.py file. You must create a new file called test_vector2d.py and write a test method for each of the following methods in Vector2D: add, subtract, scale, length, distance, normalize. You may either use the unittest or the pytest modules. Each test method should call those methods two times using different arguments: one safe set of arguments, and one dangerous set of arguments.
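A hedged pytest sketch for test_vector2d.py; the Vector2D constructor, attribute names, and error behavior below are assumptions to be adjusted against the real game.py:

import math
import pytest
from game import Vector2D

def test_add():
    v = Vector2D(1, 2).add(Vector2D(3, 4))    # safe arguments
    assert (v.x, v.y) == (4, 6)
    v = Vector2D(1, 2).add(Vector2D(-1, -2))  # dangerous: cancels to zero
    assert (v.x, v.y) == (0, 0)

def test_length():
    assert Vector2D(3, 4).length() == 5       # safe: 3-4-5 triangle
    assert Vector2D(0, 0).length() == 0       # dangerous: zero vector

def test_normalize():
    v = Vector2D(3, 4).normalize()            # safe
    assert math.isclose(v.length(), 1.0)
    with pytest.raises(ZeroDivisionError):    # dangerous: zero vector (assumed behavior)
        Vector2D(0, 0).normalize()

Tests for subtract, scale, and distance would follow the same safe/dangerous pattern.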