fragile mortals
  • Thanks, I'm blocking this community.

  • IRulely
  • iCar

  • skramp rule
  • The pig one tho...

  • On the edge of the Lagoon
  • Btw, idk if it was on purpose, but if you want people to be able to view the image directly in the app, you can just paste the direct link to the image (https://cdn.spacetelescope.org/archives/images/screen/potw2325a.jpg) into the URL field of the post.

  • The Promise and Peril of AI-Generated TV (article from 19.07.2023)
  • Just stumbled upon this post while looking for news, and I think it's a great example of why implementing this rule was the right call: https://www.reddit.com/r/singularity/comments/14u6x5p/toyota_claims_battery_breakthrough_with_a_range/

  • Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic (paper from 27.06.2023)

    >In human conversations, individuals can indicate relevant regions within a scene while addressing others. In turn, the other person can then respond by referring to specific regions if necessary. This natural referential ability in dialogue remains absent in current Multimodal Large Language Models (MLLMs). To fill this gap, this paper proposes an MLLM called Shikra, which can handle spatial coordinate inputs and outputs in natural language. Its architecture consists of a vision encoder, an alignment layer, and an LLM. It is designed to be straightforward and simple, without the need for extra vocabularies, position encoders, pre-/post-detection modules, or external plug-in models. All inputs and outputs are in natural language form. Referential dialogue is a superset of various vision-language (VL) tasks. Shikra can naturally handle location-related tasks like REC and PointQA, as well as conventional VL tasks such as Image Captioning and VQA. Experimental results showcase Shikra's promising performance. Furthermore, it enables numerous exciting applications, like providing the coordinates of mentioned objects in chains of thought and comparing the similarity of user-pointed regions. Our code, model and dataset can be accessed at this https URL.

    0
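
    The interesting bit in the abstract above is that boxes are just text. A minimal sketch of what that kind of referential I/O could look like is below; the normalized [x1, y1, x2, y2] box format and both helper functions are my own assumptions for illustration, not Shikra's actual convention:

    ```python
    import re

    def ask_about_region(question: str, box: tuple) -> str:
        """Embed a normalized [x1, y1, x2, y2] box directly in the prompt text.

        Hypothetical format: Shikra's real coordinate convention may differ.
        """
        x1, y1, x2, y2 = box
        return f"{question} I am pointing at the region [{x1:.3f}, {y1:.3f}, {x2:.3f}, {y2:.3f}]."

    def parse_boxes(reply: str) -> list:
        """Pull any [x1, y1, x2, y2] boxes the model mentions back out of its text reply."""
        pattern = r"\[\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*\]"
        return [[float(v) for v in m] for m in re.findall(pattern, reply)]

    prompt = ask_about_region("What is this animal doing?", (0.12, 0.30, 0.55, 0.82))
    reply = "It is a dog chasing a ball; the ball is at [0.60, 0.70, 0.68, 0.78]."
    print(prompt)
    print(parse_boxes(reply))  # [[0.6, 0.7, 0.68, 0.78]]
    ```
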
    Andy Jassy dismisses Microsoft and Google A.I. ‘hype cycle’ and says Amazon is starting a ‘substance cycle’ (article from 7.07.2023)

    >Amazon CEO Andy Jassy called generative A.I. “one of the biggest technical transformations of our lifetimes” in an interview with CNBC on Thursday. He also called many of today’s A.I. chatbots and other generative A.I. tools part of the “hype cycle,” declaring that Amazon was focused on the “substance cycle.”
    >
    >Amazon’s bona fides in the space are well established, having been a player in artificial intelligence and machine learning long before the ChatGPTs and Bards of the world were publicly released. Former Fortune editor Brian Dumaine wrote a book in 2020 about how Amazon founder Jeff Bezos realized early on that imbuing machine learning into every facet of the company would allow it to gather data to constantly improve itself.
    >
    >Much as it did with Amazon Web Services, which practically birthed the cloud computing industry that now powers the internet’s biggest companies, including its competitors, Amazon’s A.I. strategy is focused on cementing its position as a major player across the entirety of the A.I. supply chain.
    >
    >“Every single business unit inside of Amazon is working intensely and very broadly on generative A.I.,” Jassy says.
    >
    >Jassy shed some light on Amazon’s A.I. game plan, outlining three macro layers: the computing capabilities, the underlying models, and what Jassy refers to as the “application layer,” for example, ChatGPT or Bard.

    0
    RVT: Robotic View Transformer for 3D Object Manipulation (paper from 26.06.2023)

    >For 3D object manipulation, methods that build an explicit 3D representation perform better than those relying only on camera images. But using explicit 3D representations like voxels comes at a large computing cost, adversely affecting scalability. In this work, we propose RVT, a multi-view transformer for 3D manipulation that is both scalable and accurate. Some key features of RVT are an attention mechanism to aggregate information across views and re-rendering of the camera input from virtual views around the robot workspace. In simulations, we find that a single RVT model works well across 18 RLBench tasks with 249 task variations, achieving 26% higher relative success than the existing state-of-the-art method (PerAct). It also trains 36X faster than PerAct for achieving the same performance and achieves 2.3X the inference speed of PerAct. Further, RVT can perform a variety of manipulation tasks in the real world with just a few (∼10) demonstrations per task. Visual results, code, and trained model are provided at this https URL.

    0
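
    A toy illustration of the "attention across views" idea from the abstract above: features from each re-rendered virtual view are concatenated and attended over jointly, so every token can see every view. This is generic single-head attention in NumPy with random projections standing in for learned weights, not RVT's actual architecture:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def cross_view_attention(view_tokens, d_k=32, seed=0):
        """Toy single-head self-attention over tokens pooled from all virtual views.

        view_tokens: list of (tokens_per_view, d_model) arrays, one per rendered view.
        Returns a fused (total_tokens, d_model) representation.
        """
        rng = np.random.default_rng(seed)
        x = np.concatenate(view_tokens, axis=0)          # flatten all views into one token set
        d_model = x.shape[1]
        w_q = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
        w_k = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
        w_v = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        attn = softmax(q @ k.T / np.sqrt(d_k))           # every token attends to every view's tokens
        return attn @ v

    views = [np.random.randn(16, 64) for _ in range(5)]  # e.g. 5 virtual views, 16 tokens each
    print(cross_view_attention(views).shape)             # (80, 64)
    ```
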
    How Topical Application of Stem Cell Serum Can Reverse COVID-Induced Hair Loss (article from 30.06.2023)
    bioinformant.com How Topical Application of Stem Cell Serum Can Reverse COVID-Induced Hair Loss | BioInformant

    >Covid-19 is said to cause long-term side effects in up to 67% of patients, and these health consequences can include chronic fatigue, loss of taste and smell and brain fog. Increasingly common too is Covid-related hair loss. Known as telogen effluvium, this phenomenon manifests as clumps of hair falling out after brushing or washing your hair.
    >
    >It’s normal to shed hair daily – we lose about 100-150 hairs each day as hair drops from follicles to make way for new hair growth. This growth cycle occurs because 90% of the hair on our heads is in a growth phase (called anagen), while the remaining 10% is in a resting phase (called telogen). Anagen lasts for about three years before transitioning into the shorter telogen phase, following which hair is shed.
    >
    >A stressful event like childbirth, certain medications, intense psychological stress and Covid-19 can trigger our bodies to shift a greater-than-normal proportion of growing anagen hairs into a resting telogen state, according to the University of Utah.
    >
    >“Covid-related hair loss can affect up to 33% of symptomatic patients and 10% of asymptomatic patients,” says a plastic surgeon who deals with hair loss patients. “And this kind of hair loss seems to be different from that induced by stress or disease as cytokines (substances secreted by the body’s immune system) appear to cause direct damage to hair follicles,” she adds.
    >
    >Covid-induced hair loss has also been reported to start earlier after the stressful event – in two months instead of the usual three.

    0
    Derivative Free Weight-space Ensembling (paper from 7.07.2023)

    >Recent work suggests that interpolating between the weights of two specialized language models can transfer knowledge between tasks in a way that multi-task learning cannot. However, very few have explored interpolation between more than two models, where each has a distinct knowledge base. In this paper, we introduce Derivative Free Weight-space Ensembling (DFWE), a new few-sample task transfer approach for open-domain dialogue. Our framework creates a set of diverse expert language models trained using a predefined set of source tasks. Next, we finetune each of the expert models on the target task, approaching the target task from several distinct knowledge bases. Finally, we linearly interpolate between the model weights using a gradient-free-optimization algorithm, to efficiently find a good interpolation weighting. We demonstrate the effectiveness of the method on FETA-Friends outperforming the standard pretrain-finetune approach.

    0
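
    The recipe described in the DFWE abstract above (fine-tune several experts on the target task, then search for a good linear interpolation of their weights without gradients) can be sketched in a few lines. The random simplex search below is my stand-in for whichever gradient-free optimizer the paper actually uses, and `evaluate` is a placeholder for a dev-set metric:

    ```python
    import numpy as np

    def interpolate(state_dicts, alphas):
        """Weighted average of N checkpoints: theta = sum_i alpha_i * theta_i."""
        merged = {}
        for name in state_dicts[0]:
            merged[name] = sum(a * sd[name] for a, sd in zip(alphas, state_dicts))
        return merged

    def dfwe_search(state_dicts, evaluate, n_trials=200, seed=0):
        """Derivative-free search over interpolation weights on the probability simplex.

        `evaluate(merged_state_dict) -> float` is assumed to load the merged weights
        into the model and return a dev-set score (higher is better).
        """
        rng = np.random.default_rng(seed)
        best_alphas, best_score = None, -np.inf
        for _ in range(n_trials):
            alphas = rng.dirichlet(np.ones(len(state_dicts)))  # random point on the simplex
            score = evaluate(interpolate(state_dicts, alphas))
            if score > best_score:
                best_alphas, best_score = alphas, score
        return best_alphas, best_score

    # Toy usage with 3 "experts" of two tensors each and a dummy metric:
    experts = [{"w": np.random.randn(4, 4), "b": np.random.randn(4)} for _ in range(3)]
    alphas, score = dfwe_search(experts, evaluate=lambda sd: -np.abs(sd["w"]).sum())
    print(alphas, score)
    ```
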
    "Excited to introduce 'GPT-Researcher'!" (Found this in a reddit post from 10.07.2023. Not sure if this an official announcement, the tool may be older if it's not.)
    github.com GitHub - assafelovic/gpt-researcher: GPT based autonomous agent that does online comprehensive research on any given topic

    The idea is simple - Specify what you want to research, and the AI will autonomously research it for you in minutes!

    ▸ One prompt generates an unbiased, factual and in-depth research report

    ▸ Generate research, outlines, resource and lessons reports

    ▸ Aggregates over 20 web sources per research

    ▸ Includes an easy to use web interface

    ▸ Open source: https://github.com/assafelovic/gpt-researcher

    ▸ Scrapes web sources with JavaScript support

    ▸ Keeps track and context of visited and used web sources

    0
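
    The feature list above boils down to a fairly simple agent loop: turn the topic into search queries, fetch and summarize sources, then aggregate a report. A rough sketch of that loop follows; `call_llm`, `web_search` and `fetch_page` are hypothetical placeholders of mine, not the actual gpt-researcher API:

    ```python
    # Hypothetical outline of the research-agent loop; not gpt-researcher's real code.
    # call_llm(prompt) -> str, web_search(query) -> list of URLs, fetch_page(url) -> str
    # are placeholders you would back with an LLM API, a search API, and a scraper.

    def research(topic: str, call_llm, web_search, fetch_page, n_queries: int = 5) -> str:
        # 1. Expand the topic into several search queries to cover it from different angles.
        queries = call_llm(
            f"Write {n_queries} web search queries for researching: {topic}"
        ).splitlines()

        # 2. Collect and summarize sources (the README claims 20+ sources per run).
        notes, sources = [], []
        for query in queries:
            for url in web_search(query):
                text = fetch_page(url)  # scraping step; JS-rendered pages need a real browser
                notes.append(call_llm(f"Summarize the parts of this page relevant to '{topic}':\n{text}"))
                sources.append(url)

        # 3. Aggregate everything into a single report, keeping the source list for context.
        return call_llm(
            f"Write an objective, in-depth research report on '{topic}' from these notes:\n"
            + "\n".join(notes) + "\n\nSources:\n" + "\n".join(sources)
        )
    ```
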
    The second fusion of laser and aerospace — an inspiration for high energy lasers (article from 25.06.2023)

    >Abstract:
    >
    >Since the first laser was invented, the pursuit of high-energy lasers (HELs) has always been enthusiastic. The first revolution in HELs was driven by the fusion of laser and aerospace in the 1960s, when chemical rocket engines gave fresh impetus to the birth of gas-flow and chemical lasers, which finally turned megawatt lasers from dream into reality. Nowadays, the development of HELs has entered the age of electricity, as have rocket engines. The properties of current electric rocket engines are highly consistent with HELs’ goals, including electrical driving, effective heat dissipation, little medium consumption, and extremely light weight and size, which has inspired a second fusion of laser and aerospace and motivated the exploration of potential HELs. As an exploratory attempt, a new configuration of diode-pumped metastable rare-gas laser was demonstrated, with the gain generator resembling an electric rocket engine for improved power-scaling ability.

    0
    Focused Transformer: Contrastive Training for Context Scaling - 256k context length (paper from 6.07.2023)

    Original title: Focused Transformer: Contrastive Training for Context Scaling

    >Large language models have an exceptional capability to incorporate new information in a contextual manner. However, the full potential of such an approach is often restrained due to a limitation in the effective context length. One solution to this issue is to endow an attention layer with access to an external memory, which consists of (key, value) pairs. Yet, as the number of documents increases, the proportion of relevant keys to irrelevant ones decreases, leading the model to focus more on the irrelevant keys. We identify a significant challenge, dubbed the distraction issue, where keys linked to different semantic values might overlap, making them hard to distinguish. To tackle this problem, we introduce the Focused Transformer (FoT), a technique that employs a training process inspired by contrastive learning. This novel approach enhances the structure of the (key, value) space, enabling an extension of the context length. Our method allows for fine-tuning pre-existing, large-scale models to lengthen their effective context. This is demonstrated by our fine-tuning of 3B and 7B OpenLLaMA checkpoints. The resulting models, which we name LongLLaMA, exhibit advancements in tasks requiring a long context. We further illustrate that our LongLLaMA models adeptly manage a 256k context length for passkey retrieval.

    0
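
    The core mechanism in the abstract above, an attention layer that can also look at an external memory of (key, value) pairs, can be sketched as below. This is just vanilla attention over the concatenation of local and memory keys in NumPy; the contrastive training that FoT adds on top is not shown:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def memory_attention(q, local_k, local_v, mem_k, mem_v):
        """Attention where queries see the local context plus an external (key, value) memory.

        q: (n_queries, d), local_k/local_v: (n_local, d), mem_k/mem_v: (n_mem, d).
        As n_mem grows, irrelevant memory keys compete with relevant ones: that is the
        "distraction issue" FoT's contrastive training is meant to reduce.
        """
        k = np.concatenate([local_k, mem_k], axis=0)
        v = np.concatenate([local_v, mem_v], axis=0)
        scores = q @ k.T / np.sqrt(q.shape[-1])
        return softmax(scores) @ v

    d = 64
    q = np.random.randn(8, d)
    out = memory_attention(q,
                           np.random.randn(128, d), np.random.randn(128, d),    # local context
                           np.random.randn(4096, d), np.random.randn(4096, d))  # external memory
    print(out.shape)  # (8, 64)
    ```
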
    Test
  • There are so many test posts lately.

  • The Promise and Peril of AI-Generated TV (article from 19.07.2023)
  • It's not a commonly seen rule. I do it for transparency reasons. I will make a post about the rules and pin it, but first I need to rewrite some of them, and I'll have to find time for that while I'm really busy. :x

  • The Promise and Peril of AI-Generated TV (article from 19.07.2023)
  • Add the date of the article to the title as per our rule 6. Copy this:

    (article from 19.07.2023)

    Thank you.

  • *Permanently Deleted*
  • You mean my post?

  • Anyone having trouble uploading images lately?
  • Do you mind if I use the word "sublemmy" in the announcement? People HATE this word and it will potentially kill off the announcement because 1 in 3 people will downvote it. xD

  • Introducing AnthropicAI's Claude 2 (Announcement from 11.07.2023)
  • You're right, seems like it:

    Currently, the Claude 2 API is available to businesses only. Additionally, to gain access, you need to send a request to the Anthropic team.

    However, if you live anywhere except the US or UK, you cannot use Claude 2.

    Source: https://thenaturehero.com/claude-2-api/ ("Claude 2 API – Everything You Need To Know")
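
    For anyone who does get access, a call would presumably look something like the sketch below with Anthropic's Python SDK. Treat the exact interface (the `Anthropic` client, `completions.create`, the `claude-2` model name and the prompt constants) as my assumption about what the SDK looked like at the time; check their docs before relying on it:

    ```python
    # Sketch only: assumes the anthropic Python SDK (~v0.3) interface; verify against Anthropic's docs.
    from anthropic import AI_PROMPT, HUMAN_PROMPT, Anthropic

    client = Anthropic(api_key="YOUR_API_KEY")  # key granted after the access request mentioned above

    completion = client.completions.create(
        model="claude-2",
        max_tokens_to_sample=300,
        prompt=f"{HUMAN_PROMPT} Summarize what is new in Claude 2 compared to Claude 1.3.{AI_PROMPT}",
    )
    print(completion.completion)
    ```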

  • Introducing AnthropicAI's Claude 2 (Announcement from 11.07.2023)
  • We are pleased to announce Claude 2, our new model. Claude 2 has improved performance, longer responses, and can be accessed via API as well as a new public-facing beta website, claude.ai.

  • GPT-4 details leaked (Leak from ~10.07.2023)

    I just copy/pasted what's in the link so formatting may be broken:

    >GPT-4's details are leaked.
    >
    >It is over.
    >
    >Everything is here: twitter.com/i/web/status/1…
    >
    >Parameter count:
    >
    >GPT-4 is more than 10x the size of GPT-3. We believe it has a total of ~1.8 trillion parameters across 120 layers. Mixture of Experts: confirmed. OpenAI was able to keep costs reasonable by using a mixture-of-experts (MoE) model. They use 16 experts, each about ~111B parameters for the MLP. 2 of these experts are routed to per forward pass.
    >
    >MoE routing:
    >
    >While the literature talks a lot about advanced routing algorithms for choosing which experts to route each token to, OpenAI's is allegedly quite simple, at least for the current GPT-4 model. There are roughly ~55B shared parameters for attention.
    >
    >Inference:
    >
    >Each forward-pass inference (generation of 1 token) only utilizes ~280B parameters and ~560 TFLOPs. This contrasts with the ~1.8 trillion parameters and ~3,700 TFLOPs that would be required per forward pass of a purely dense model.
    >
    >Dataset:
    >
    >GPT-4 is trained on ~13T tokens. These are not unique tokens; they count the epochs as more tokens as well. Epoch count: 2 epochs for text-based data and 4 for code-based data. There are millions of rows of instruction fine-tuning data from ScaleAI and from internal sources.
    >
    >GPT-4 32K:
    >
    >There was an 8k context length (seqlen) for the pre-training phase. The 32k-seqlen version of GPT-4 is based on fine-tuning of the 8k model after pre-training.
    >
    >Batch size:
    >
    >The batch size was gradually ramped up over a number of days on the cluster, but by the end OpenAI was using a batch size of 60 million! This, of course, is "only" a batch size of 7.5 million tokens per expert, since not every expert sees all tokens. For the real batch size, divide this number by the seqlen.
    >
    >Parallelism strategies:
    >
    >To parallelize across all their A100 GPUs, they utilized 8-way tensor parallelism, as that is the limit for NVLink. Beyond that, they are using 15-way pipeline parallelism. (They likely used ZeRO Stage 1; it is possible they used block-level FSDP.)
    >
    >Training cost:
    >
    >OpenAI's training compute for GPT-4 is ~2.15e25 FLOP, on ~25,000 A100s for 90 to 100 days at about 32% to 36% MFU. Part of this extremely low utilization is due to an absurd number of failures requiring restarts from checkpoints. If their cost in the cloud was about $1 per A100-hour, the training cost for this run alone would be about $63 million. (Today, the pre-training could be done with ~8,192 H100s in ~55 days for $21.5 million at $2 per H100-hour.)
    >
    >Mixture-of-experts tradeoffs:
    >
    >There are multiple MoE tradeoffs taken: for example, MoE is incredibly difficult to deal with at inference because not every part of the model is utilized on every token generation. This means parts may sit dormant while other parts are being used, and when serving users this really hurts utilization rates. Researchers have shown that using 64 to 128 experts achieves better loss than 16 experts, but that's purely research. There are multiple reasons to go with fewer experts: one reason OpenAI chose 16 is that more experts are difficult to generalize across many tasks, and can also be more difficult to get to converge. With such a large training run, OpenAI instead chose to be more conservative on the number of experts.
    >
    >GPT-4 inference cost:
    >
    >GPT-4 costs 3x that of the 175B-parameter DaVinci. This is largely due to the larger clusters required for GPT-4 and the much lower utilization achieved. An estimate of its cost is $0.0049 per 1k tokens for 128 A100s serving GPT-4 at 8k seqlen, and $0.0021 per 1k tokens for 128 H100s at 8k seqlen. It should be noted this assumes decently high utilization and high batch sizes.
    >
    >Multi-query attention:
    >
    >OpenAI are using MQA just like everybody else. Because of that, only 1 KV head is needed and memory capacity for the KV cache can be significantly reduced. Even then, the 32k-seqlen GPT-4 definitely cannot run on 40GB A100s, and the 8k version is capped on maximum batch size.
    >
    >Continuous batching:
    >
    >OpenAI implements both variable batch sizes and continuous batching. This is done to allow some bound on maximum latency while also optimizing inference costs.
    >
    >Vision multi-modal:
    >
    >It is a separate vision encoder from the text encoder, with cross-attention. The architecture is similar to Flamingo. This adds more parameters on top of GPT-4's 1.8T. It is fine-tuned with another ~2 trillion tokens after the text-only pre-training. For the vision model, OpenAI wanted to train it from scratch, but it wasn't mature enough, so they de-risked it by starting with text. One of the primary purposes of this vision capability is autonomous agents able to read web pages and transcribe what's in images and video. Some of the data they train on is joint data (rendered LaTeX/text), screenshots of web pages, and YouTube videos: sampled frames, with Whisper run over them to get transcripts.
    >
    >[Don't want to say "I told you so" but...]
    >
    >Speculative decoding:
    >
    >OpenAI might be using speculative decoding on GPT-4's inference (not 100% sure). The idea is to use a smaller, faster model to decode several tokens in advance and then feed them into a large oracle model as a single batch. If the small model was right about its predictions, the larger model agrees and several tokens are decoded in a single batch. But if the larger model rejects the tokens predicted by the draft model, the rest of the batch is discarded and decoding continues with the larger model. The conspiracy theory that the new GPT-4's quality has deteriorated might simply be because they are letting the oracle model accept lower-probability sequences from the speculative-decoding model.
    >
    >Inference architecture:
    >
    >The inference runs on a cluster of 128 GPUs. There are multiple of these clusters in multiple datacenters in different locations. It is done with 8-way tensor parallelism and 16-way pipeline parallelism. Each node of 8 GPUs has only ~130B parameters, or… twitter.com/i/web/status/1… The model has 120 layers, so it fits in 15 different nodes. [Possibly there are fewer layers on the first node since it also needs to compute the embeddings.] According to these numbers, OpenAI should have trained on 2x the tokens if they were trying to be Chinchilla-optimal [let alone surpass it like we do]. This goes to show that they are struggling to get high-quality data.
    >
    >Why no FSDP?
    >
    >A possible reason could be that some of the hardware infrastructure they secured is of an older generation. This is pretty common at local compute clusters, as the organisation usually upgrades the infrastructure in several "waves" to avoid a complete pause of operations.… twitter.com/i/web/status/1…
    >
    >Dataset mixture:
    >
    >They trained on 13T tokens. CommonCrawl and RefinedWeb are both 5T. Remove the duplication of tokens from multiple epochs and we get to a much more reasonable number of "unaccounted for" tokens: the "secret" data, which by this point is already rumored to include material from Twitter, Reddit and YouTube. [Rumors that are starting to become lawsuits.]
    >
    >Some speculations are:
    >
    >- LibGen (4M+ books)
    >- Sci-Hub (80M+ papers)
    >- All of GitHub
    >
    >My own opinion:
    >
    >The missing dataset is a custom dataset of college textbooks collected by hand for as many courses as possible. This is very easy to convert to text files and then, with self-instruct, into instruction form. It creates the "illusion" that GPT-4 "is smart" no matter who uses it. Computer scientist? Sure, it can help you with your questions about P != NP. Philosophy major? It can totally talk to you about epistemology. Don't you see? It was trained on the textbooks. It is so obvious. There are also papers that try to forcibly extract memorized parts of books from GPT-4 to understand what it was trained on. There are some books it knows so well that it must have seen them. Moreover, if I remember correctly, it even knows the unique IDs of Project Euler exercises.

    2
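
    Whatever the leak's accuracy, its headline numbers are at least internally consistent. A quick back-of-the-envelope check using only figures quoted above plus the A100's ~312 TFLOPS dense BF16 peak (the one number I am adding myself):

    ```python
    # Sanity-checking the leaked figures against each other; nothing here is independent confirmation.

    # Active parameters per token: 2 routed experts (~111B MLP each) + ~55B shared attention params.
    active_params = 2 * 111e9 + 55e9
    print(f"active params per forward pass ~ {active_params / 1e9:.0f}B")  # ~277B, i.e. the quoted "~280B"

    # Training compute vs. cluster size: 2.15e25 FLOP on ~25,000 A100s at ~34% MFU.
    a100_peak_flops = 312e12                           # dense BF16 peak of one A100
    effective_flops = 25_000 * a100_peak_flops * 0.34  # sustained cluster throughput
    days = 2.15e25 / effective_flops / 86_400
    print(f"training time ~ {days:.0f} days")          # ~94 days, matching "90 to 100 days"

    # Cost at $1 per A100-hour over that run.
    cost = 25_000 * days * 24 * 1.0
    print(f"training cost ~ ${cost / 1e6:.0f}M")       # mid-to-high $50Ms, same ballpark as the quoted $63M
    ```
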
    Introducing Llama 2 - Meta's Next-Generation Commercially Viable Open-Source AI & LLM (paper from 18.07.2023)
  • Please add the date of the source to the title as per our rule 6. Copy this:

    (paper from 18.07.2023)

    Thank you.

  • hmmm
  • And he damn does!

  • Password reset broken on lemmy.fmhy.ml
  • Naw, this platform is still very buggy so I doubt that it's your fault.

  • Martineski @lemmy.fmhy.ml

    A fellow ADHDer addicted to the platform

    Schedule here: lemmy.fmhy.ml/post/301360


    Posts 603
    Comments 687