🔍 Search Reddit Communities - Echo v1.6

Echo v1.6 has been released with support for searching Reddit communities.

Simply navigate to the Explore tab and enter a subreddit name (e.g., r/technology) or a link to the subreddit to find Lemmy community alternatives.

This is only the beginning. Stay tuned for more!

Below are the full release notes.

```

  • Introducing: Search Reddit Communities
    • Have a favorite community on Reddit and want to find similar Lemmy communities? Simply search for the community in the Explore tab, and see similar Lemmy communities.
    • For example, search for r/apple or r/worldnews.
  • General bug fixes, performance improvements, and behind-the-scenes improvements.
```

[Screenshot: a search in Echo showing the query “r/technology” in the search bar. Under the “Communities” section, a single result appears: a community named “Technology” with a yellow hexagon icon featuring a black microchip symbol.]

When building a home server, could a used/cheap PC do the job?
  • It really depends on what you're trying to do. At the end of the day, the foundational components are pretty standard across the board: every machine has a CPU, motherboard, storage, etc. Purpose-built servers often have a form factor better suited for rack mounting, and they often have more powerful components.

    But the difference isn't as striking as people unfamiliar with this stuff tend to think.

    Since this is your first experience, I'd start by converting an old PC given the lower price point, then expand as needed. You'll learn a lot and get a lot of experience from starting there.

  • Netflix says its brief Apple TV app integration was a mistake
  • This is not a “mistake”. It clearly proves they have Apple TV app integration implemented (just turned off), and someone accidentally turned it on.

    They have clearly put effort and work into adding this functionality.

    New functionality doesn’t just happen by mistake.
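
    (To illustrate the pattern: dark-launched features usually sit behind a remote flag like this. A hypothetical sketch only; none of these names are Netflix's.)

    ```python
    # Hypothetical sketch of a remote feature flag gating shipped-but-hidden code.
    def fetch_remote_config() -> dict:
        # A real app would fetch this from the vendor's config service.
        return {"apple_tv_app_integration": False}

    def show_apple_tv_integration() -> bool:
        # Flipping this one server-side boolean "launches" functionality
        # that already shipped inside the app.
        return fetch_remote_config().get("apple_tv_app_integration", False)
    ```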

  • Very inconsistent machine learning model training
  • Got it. Thanks so much for your help!! Still a lot to learn here.

    Coming from a world of building software where things are very binary (it works or it doesn't), it's also really tough to judge what counts as "good enough". There is a point of diminishing returns, and I'm not sure at what point to say it's good enough vs. continuing to learn and improve it.

    Really appreciate your help here tho.

  • Very inconsistent machine learning model training
  • So someone else suggested reducing the learning rate. I tried that, and at least to me it looks a lot more stable between runs. All the code is my original code (none of the suggestions you made), but I reduced the learning rate to 0.00001 instead of 0.0001.

    Not quite sure what that means exactly tho. Or if more adjustments are needed.

    As for the confusion matrix: I think the issue is the difference between smoothed values in TensorBoard vs the actual values. I just ran it again with the previous values to verify, and it does look like it matches up if you look at the actual value instead of the smoothed one.
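
    (For reference, the change is just the optimizer's learning rate. A minimal sketch; the input size and layers here are stand-ins for the model in my gist, not the exact code.)

    ```python
    import keras

    # Stand-in model: only the learning_rate change below is the actual edit.
    model = keras.Sequential([
        keras.layers.Input(shape=(64, 64, 3)),
        keras.layers.Conv2D(16, 10, strides=(5, 5), activation='relu'),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-5),  # was 1e-4
        loss='binary_crossentropy',
        metrics=['accuracy'],
    )
    ```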

  • Very inconsistent machine learning model training
  • Sorry for the delayed reply. I really appreciate your help so far.

    Here is the raw link to the confusion matrix: https://eventfrontier.com/pictrs/image/1a2bc13e-378b-4920-b7f6-e5b337cd8c6f.webm

    I changed it to `keras.layers.Conv2D(16, 10, strides=(5, 5), activation='relu')`. Dense units still at 64.

    And in case the confusion matrix still doesn't work, here is a still image from the last run.

    EDIT: The wrong image was uploaded originally.

  • Very inconsistent machine learning model training
  • Ok I changed the Conv2D layer to be 10x10. I also changed the dense units to 64. Here is a single run of that with a confusion matrix.

    I don't really see a bias towards non-blurred images.

  • Very inconsistent machine learning model training
  • So does the fact that they aren't converging near the same point indicate there is a problem with my architecture and model design?

  • Very inconsistent machine learning model training
  • Got it. I'll try with some more values and see what that leads to.

    So does that mean my learning rate might be too high and it's overshooting the optimal solution sometimes based on those random weights?
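
    (A related knob, as a sketch; this isn't in my code, but Keras can also lower the learning rate automatically when validation loss plateaus.)

    ```python
    import keras

    # Sketch: halve the learning rate whenever val_loss stalls for 2 epochs,
    # so a too-high starting rate stops overshooting later in training.
    reduce_lr = keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss', factor=0.5, patience=2, min_lr=1e-6
    )
    # Then pass it to training: model.fit(..., callbacks=[reduce_lr])
    ```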

  • Very inconsistent machine learning model training
  • I think what you’re referring to with iterating through algorithms and such is called hyperparameter tuning. I think there is a tool called Keras Tuner you can use for this.

    However, I’m incredibly skeptical that will work in this situation because of how variable the results are between runs. I run it with the same input, same code, everything, and get wildly different results. So for that to be effective, I think it needs to be fairly consistent between runs.

    I could be totally off base here tho. (I haven’t worked with this stuff a ton yet).
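
    For reference, a Keras Tuner sketch would look roughly like this (the search ranges and the `train_images`/`train_labels` names are placeholders, not my actual code):

    ```python
    import keras
    import keras_tuner as kt  # pip install keras-tuner

    def build_model(hp):
        # Search over the knobs discussed in this thread; the ranges are guesses.
        model = keras.Sequential([
            keras.layers.Input(shape=(64, 64, 3)),
            keras.layers.Conv2D(hp.Int('filters', 8, 32, step=8),
                                hp.Choice('kernel', [3, 5, 10]), activation='relu'),
            keras.layers.Flatten(),
            keras.layers.Dense(hp.Choice('dense_units', [64, 128]), activation='relu'),
            keras.layers.Dense(1, activation='sigmoid'),
        ])
        model.compile(
            optimizer=keras.optimizers.Adam(hp.Float('lr', 1e-5, 1e-3, sampling='log')),
            loss='binary_crossentropy', metrics=['accuracy'])
        return model

    tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=10)
    # tuner.search(train_images, train_labels, validation_split=0.2, epochs=10)
    ```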

  • Very inconsistent machine learning model training
  • Thanks so much for the reply!

    > The convolution size seems a little small

    I changed this to 5 instead of 3, and hard to tell if that made much of an improvement. It still is pretty inconsistent between training runs.

    > If it doesn’t I’d look into reducing the number of filters or the dense layer. Reducing the available space can force an overfitting network to figure out more general solutions

    I'll try reducing the dense layer from 128 to 64 next.

    > Lastly, I bet someone else has either solved the same problem as an exercise or something similar and you could check out their network architecture to see if your solution is in the ballpark of something that works

    This is a great idea. I did a quick Google search and nothing stood out to start. But I'll dig deeper.


    It's still super weird to me how variable it can be with zero changes. I don't change anything, and one run it's consistently improving for a few epochs; the next run it starts out a lot less accurate and declines after the first epoch.
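
    (If the variability really is just random initialization and shuffling, pinning the seeds should make runs repeatable. A minimal sketch, assuming a recent TF/Keras version:)

    ```python
    import tensorflow as tf
    import keras

    # Seed Python, NumPy, and the TF/Keras RNGs in one call so the random
    # initial weights and data shuffling are identical between runs.
    keras.utils.set_random_seed(42)

    # Optional: bit-for-bit reproducible GPU kernels (at some speed cost).
    tf.config.experimental.enable_op_determinism()
    ```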

  • Very inconsistent machine learning model training

    I'm trying to train a machine learning model to detect if an image is blurred or not.

    I have 11,798 unblurred images, and I have a script to blur them and then use that to train my model.

    However, when I run the exact same training 5 times, the results are wildly inconsistent. It also only tops out at 98.67% accuracy.

    I'm pretty new to machine learning, so maybe I'm doing something really wrong. Coming from a software engineering background and just starting to learn, I have tons of questions: it's a struggle to know why it's so inconsistent between runs, how good is good enough (i.e., when I should deploy the model), and how to continue to improve the accuracy and make the model better.

    Any advice or insight would be greatly appreciated.

    View all the code: https://gist.github.com/fishcharlie/68e808c45537d79b4f4d33c26e2391dd
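
    (For anyone curious, the blurring step is conceptually just this. A minimal sketch using PIL; the directory names are placeholders, and it's not the exact script from the gist.)

    ```python
    from pathlib import Path
    from PIL import Image, ImageFilter

    # Sketch of the idea behind the blur script: every sharp image gets a
    # Gaussian-blurred copy, giving labeled sharp/blurred pairs for training.
    src, dst = Path('images/sharp'), Path('images/blurred')
    dst.mkdir(parents=True, exist_ok=True)
    for path in src.glob('*.jpg'):
        Image.open(path).filter(ImageFilter.GaussianBlur(radius=4)).save(dst / path.name)
    ```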


    Instance owners, how did you get your server online?
  • That’s attached to the instance? Do you have a screenshot maybe?

  • Instance owners, how did you get your server online?
  • What is the error that you get?

  • 📸 Image Post Support - Echo v1.5

    Echo v1.5 has been released with support for uploading images as part of your Echo post.

    This release also includes support for Apple's new Apple Intelligence Image Playground feature. Generate new images based on an existing image or create a brand new image of anything you want. Echo has also created a new Lemmy community for Image Playground Showcase: !ip_showcase@eventfrontier.com. Be sure to join and post all your best (or worst) Image Playground images.

    We aren't done yet! Much more coming soon. We are committed to building the best native-first Lemmy experience for iOS.

    Below are the full release notes.

    ```

    • Support for uploading images in posts.
      • Requires your Lemmy instance to support pictrs at the default URL.
    • Adds support for Image Playground. Generate an image from an existing image or create a new image using Apple Intelligence.
      • Requires iOS 18.2, an Apple Intelligence supported device, and access to be granted to Image Playground.
    • Fixes an issue where reply & bookmark options wouldn't show up in comment ellipsis menu.
    • Fixes an issue where sometimes when closing a screen multiple screens would close unintentionally.
    • Fixes an issue where sometimes avatar images would not display correctly.
    • General bug fixes, performance improvements, and behind-the-scenes improvements.
    ```
    Does my instance federate with communities, or does my user?
  • Yes. It will just fill your feed with a bunch of things you might not care about. But admin vs. non-admin doesn’t matter in the context of what I said.

  • Does my instance federate with communities, or does my user?
  • Your instance is the one that federates. However, it starts with a user subscribing to that content; your instance won’t normally federate content without user interaction.

    Normally the solution for the second part is relays, but that isn’t something Lemmy supports currently. This issue is very common with smaller instances. It isn’t as big of a deal with bigger instances, since users are more likely to have subscribed to more communities that will automatically be federated to your instance. You could experiment with creating a user and subscribing to a bunch of communities so they get federated to your instance.
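
    (A sketch of what that experiment might look like against Lemmy's HTTP API. The endpoint below is `/api/v3/community/follow`, but double-check the API docs for your Lemmy version; the instance URL, token, and community IDs here are placeholders.)

    ```python
    import requests

    INSTANCE = "https://lemmy.example.com"  # placeholder instance
    JWT = "your-login-token"                # the dedicated user's auth token

    def follow_community(community_id: int) -> None:
        # Lemmy 0.19+ accepts the login JWT as a bearer token.
        resp = requests.post(
            f"{INSTANCE}/api/v3/community/follow",
            json={"community_id": community_id, "follow": True},
            headers={"Authorization": f"Bearer {JWT}"},
            timeout=10,
        )
        resp.raise_for_status()

    # Subscribing to a handful of remote communities pulls their content
    # into the instance going forward.
    for cid in [101, 102, 103]:  # hypothetical community IDs
        follow_community(cid)
    ```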

  • *Permanently Deleted*
  • It’s not really any different than hosting any other service.

  • Do people actually use Mastodon for something else than posting cats and hiking photos?
  • I was lucky to get in during the early days when posting Mastodon handles on Twitter was common, so I was able to migrate easily. But this is a problem with ActivityPub right now, I feel: discovery algorithms can be awful in the timeline, but they're so useful for finding people/communities to follow.

  • Eve Energy smart plugs transmit Energy information via Matter
  • Yep just saw that too after I researched it a bit more. What is strange is I don't remember Eve Energy having a firmware update since then. Makes me wonder if they had it ready to go in previous firmware versions based on internal specs they saw? Or maybe I just forgot about a firmware update I did.

  • apnews.com Hurricane-stricken Tampa Bay Rays to play 2025 season at Yankees' spring training field in Tampa

    The Tampa Bay Rays will play their 2025 home games at the New York Yankees’ nearby spring training ballpark amid uncertainty about the future of hurricane-damaged Tropicana Field.

    Eve Energy smart plugs transmit Energy information via Matter
  • > but as the Matter standard doesn't yet support energy monitoring, users are limited to basic features like on and off and scheduling

    - from this link

    Granted the article is almost a year old. But I just didn't realize that Matter now supports energy monitoring. Somehow I just missed that news.

  • Eve Energy smart plugs transmit Energy information via Matter

    I just learned that the Eve Energy smart plugs transmit energy consumption information via Matter. I didn't think Matter supported energy consumption data yet, but it does.

    This makes them incredible to use with the Home Assistant Energy dashboard.

    Even tho I was hesitant for a while, I took the leap and started using the Matter beta Home Assistant integration, and no issues so far.

    5
    Multiple Account Support is here! - Echo 1.4

    Super happy to announce the release of multiple account support in Echo v1.4! Easily switch between Lemmy accounts (even across multiple instances/servers) in Echo without having to log out of your existing account.

    The full release notes are listed below.

    ```

    • Multiple Account support!
      • Do you have multiple Lemmy accounts? Maybe across multiple instances? Well now you can sign into all of them in Echo without having to log out of your existing account.
      • Requires Echo+ subscription.
    • Fixes issue where community list would flash results when opening.
    • Adds loading indicator to Explore page after searching.
    • Lemmy 0.19.6 support & improvements.
    • Fixes issue where in rare cases deleted/removed communities would show in the community list.
    • Vast performance improvements.
    • More behind-the-scenes improvements than we can count.
    ```
    9to5mac.com Apple teams up with airlines for new ‘Share Item Location’ AirTags feature in iOS 18.2 - 9to5Mac

    In the latest beta of iOS 18.2, Apple upgraded the Find My app with support for sharing a link to...

    Best way to determine if a Lemmy server has a pictrs server?

    It seems like running a pictrs server is optional when running Lemmy. I'm trying to figure out if a given instance supports pictrs.

    The pictrs documentation mentions a `GET /healthz` endpoint. However, when I try to access https://lemmy.ml/pictrs/healthz for example, it gives me a 404, even tho I know lemmy.ml has a pictrs server.

    What is the best way to determine if a Lemmy server has pictrs?
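
    (Here's the probe as a snippet, in case it helps someone spot what's wrong. `GET /pictrs/healthz` is what the pictrs docs describe, but as noted above some instances, like lemmy.ml, 404 on it, presumably because the reverse proxy doesn't expose that path.)

    ```python
    import requests

    def has_pictrs(instance: str) -> bool:
        # Probe the documented pictrs health endpoint. A 404 doesn't prove
        # pictrs is absent; the proxy may simply not expose /healthz.
        try:
            return requests.get(f"https://{instance}/pictrs/healthz", timeout=5).ok
        except requests.RequestException:
            return False

    print(has_pictrs("lemmy.ml"))  # False here, despite lemmy.ml running pictrs
    ```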

    Obtaining Certification IDs
  • I'm not aware of any official Ubiquiti certifications. Maybe it was a 3rd party certification? Someone else might know more than I do tho.

  • The Fediverse Desperately Needs Sustainable File Hosting
  • I know I'm not necessarily the target audience for this, but it feels too expensive: 6x the price of Cloudflare R2, almost 13x the price of Wasabi. Even iCloud storage is $0.99 for 50 GB with a 5 GB free tier. But again, I know I'm not necessarily the target audience, as I have a lot of technical skills that maybe average users don't have.

    If you ever get around to building an API, and are interested in partnerships, let me know. Maybe there is a possibility for integration into !echo@eventfrontier.com 😉.

  • coremltools Error: ValueError: perm should have the same length as rank(x): 3 != 2

    cross-posted from: https://eventfrontier.com/post/177049

    > I keep getting an error `ValueError: perm should have the same length as rank(x): 3 != 2` when trying to convert my model using coremltools.
    >
    > From my understanding the most common case for this is when the input shape you pass into coremltools doesn't match your model's input shape. However, as far as I can tell in my code it does match. I also added an input layer, and that didn't help either.
    >
    > I have put a lot of effort into reducing my code as much as possible while still giving a minimal complete verifiable example. However, I'm aware that the code is still a lot. Starting at line 60 of my code is where I create and train my model.
    >
    > I'm running this on Ubuntu, with NVIDIA set up with Docker.
    >
    > Any ideas what I'm doing wrong?
    >
    > Code including Dockerfile & start script as GitHub Gist: https://gist.github.com/fishcharlie/af74d767a3ba1ffbf18cbc6d6a131089

    coremltools Error: ValueError: perm should have the same length as rank(x): 3 != 2

    I keep getting an error `ValueError: perm should have the same length as rank(x): 3 != 2` when trying to convert my model using coremltools.

    From my understanding the most common case for this is when the input shape you pass into coremltools doesn't match your model's input shape. However, as far as I can tell in my code it does match. I also added an input layer, and that didn't help either.

    I have put a lot of effort into reducing my code as much as possible while still giving a minimal complete verifiable example. However, I'm aware that the code is still a lot. Starting at line 60 of my code is where I create and train my model.

    I'm running this on Ubuntu, with NVIDIA set up with Docker.

    Any ideas what I'm doing wrong?

    ---

    ```python
    from typing import TypedDict, Optional, List
    import tensorflow as tf
    import json
    from tensorflow.keras.optimizers import Adam
    import numpy as np
    from sklearn.utils import resample
    import keras
    import coremltools as ct

    # Simple tokenizer function
    word_index = {}
    index = 1
    def tokenize(text: str) -> list:
        global word_index
        global index
        words = text.lower().split()
        sequences = []
        for word in words:
            if word not in word_index:
                word_index[word] = index
                index += 1
            sequences.append(word_index[word])
        return sequences

    def detokenize(sequence: list) -> str:
        global word_index
        # Filter sequence to remove all 0s
        sequence = [int(index) for index in sequence if index != 0.0]
        words = [word for word, index in word_index.items() if index in sequence]
        return ' '.join(words)

    # Pad sequences to the same length
    def pad_sequences(sequences: list, max_len: int) -> list:
        padded_sequences = []
        for seq in sequences:
            if len(seq) > max_len:
                padded_sequences.append(seq[:max_len])
            else:
                padded_sequences.append(seq + [0] * (max_len - len(seq)))
        return padded_sequences

    class PreprocessDataResult(TypedDict):
        inputs: tf.Tensor
        labels: tf.Tensor
        max_len: int

    def preprocess_data(texts: List[str], labels: List[int], max_len: Optional[int] = None) -> PreprocessDataResult:
        tokenized_texts = [tokenize(text) for text in texts]
        if max_len is None:
            max_len = max(len(seq) for seq in tokenized_texts)
        padded_texts = pad_sequences(tokenized_texts, max_len)

        return PreprocessDataResult({
            'inputs': tf.convert_to_tensor(np.array(padded_texts, dtype=np.float32)),
            'labels': tf.convert_to_tensor(np.array(labels, dtype=np.int32)),
            'max_len': max_len
        })

    # Define your model architecture
    def create_model(input_shape: int) -> keras.models.Sequential:
        model = keras.models.Sequential()

        model.add(keras.layers.Input(shape=(input_shape,), dtype='int32', name='embedding_input'))
        model.add(keras.layers.Embedding(input_dim=10000, output_dim=128))  # `input_dim` represents the size of the vocabulary (i.e. the number of unique words in the dataset).
        model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=64, return_sequences=True)))
        model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=32)))
        model.add(keras.layers.Dense(units=64, activation='relu'))
        model.add(keras.layers.Dropout(rate=0.5))
        model.add(keras.layers.Dense(units=1, activation='sigmoid'))  # Output layer, binary classification (meaning it outputs a 0 or 1, false or true). The sigmoid function outputs a value between 0 and 1, which can be interpreted as a probability.

        model.compile(
            optimizer=Adam(),
            loss='binary_crossentropy',
            metrics=['accuracy']
        )

        return model

    # Train the model
    def train_model(
        model: tf.keras.models.Sequential,
        train_data: tf.Tensor,
        train_labels: tf.Tensor,
        epochs: int,
        batch_size: int
    ) -> tf.keras.callbacks.History:
        return model.fit(
            train_data,
            train_labels,
            epochs=epochs,
            batch_size=batch_size,
            callbacks=[
                keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5),
                keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1),
                # When downgrading from TensorFlow 2.18.0 to 2.12.0 I had to change this from `./best_model.keras` to `./best_model.tf`
                keras.callbacks.ModelCheckpoint(filepath='./best_model.tf', monitor='val_accuracy', save_best_only=True)
            ]
        )

    # Example usage
    if __name__ == "__main__":
        # Check available devices
        print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

        with tf.device('/GPU:0'):
            print("Loading data...")
            data = (["I love this!", "I hate this!"], [0, 1])
            rawTexts = data[0]
            rawLabels = data[1]

            # Preprocess data
            processedData = preprocess_data(rawTexts, rawLabels)
            inputs = processedData['inputs']
            labels = processedData['labels']
            max_len = processedData['max_len']

            print("Data loaded. Max length: ", max_len)

            # Save word_index to a file
            with open('./word_index.json', 'w') as file:
                json.dump(word_index, file)

            model = create_model(max_len)

            print('Training model...')
            train_model(model, inputs, labels, epochs=1, batch_size=32)
            print('Model trained.')

            # When downgrading from TensorFlow 2.18.0 to 2.12.0 I had to change this from `./best_model.keras` to `./best_model.tf`
            model.load_weights('./best_model.tf')
            print('Best model weights loaded.')

            # Save model
            # I think the .h5 extension allows for converting to CoreML, whereas the .keras file extension does not
            model.save('./toxic_comment_analysis_model.h5')
            print('Model saved.')

            my_saved_model = tf.keras.models.load_model('./toxic_comment_analysis_model.h5')
            print('Model loaded.')

            print("Making prediction...")
            test_string = "Thank you. I really appreciate it."
            tokenized_string = tokenize(test_string)
            padded_texts = pad_sequences([tokenized_string], max_len)
            tensor = tf.convert_to_tensor(np.array(padded_texts, dtype=np.float32))
            predictions = my_saved_model.predict(tensor)
            print(predictions)
            print("Prediction made.")

            # Convert the Keras model to Core ML
            coreml_model = ct.convert(
                my_saved_model,
                inputs=[ct.TensorType(shape=(max_len,), name="embedding_input", dtype=np.int32)],
                source="tensorflow"
            )

            # Save the Core ML model
            coreml_model.save('toxic_comment_analysis_model.mlmodel')
            print("Model successfully converted to Core ML format.")
    ```

    Code including Dockerfile & start script as GitHub Gist: https://gist.github.com/fishcharlie/af74d767a3ba1ffbf18cbc6d6a131089
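
    (One guess I haven't verified: the converter may want the batch dimension included in the declared input shape, so the declared rank matches what the model was traced with. That would change the convert call from the code above like this:)

    ```python
    # Unverified guess: declare the input with the batch dimension included,
    # so the declared rank matches the model's (batch, sequence) input.
    coreml_model = ct.convert(
        my_saved_model,
        inputs=[ct.TensorType(shape=(1, max_len), name="embedding_input", dtype=np.int32)],
        source="tensorflow",
    )
    ```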

    TensorFlow Lemmy Community
    eventfrontier.com TensorFlow - EventFrontier

    Discussion, questions, news, and more about the TensorFlow [https://www.tensorflow.org] machine learning library.


    I created a Lemmy community specifically for TensorFlow! Check it out and subscribe if you're interested.

    Echo Status Update - early November 2024

    I wanted to provide the community with a quick status update on the development of Echo. This is the longest stretch without an update since Echo was released. That's mostly because I'm currently working on 6+ major new features for Echo that are all in varying stages of completion. (Also because this past week my computer was being repaired, which took away from time to work on Echo.)

    I hope to wrap up at least one of these features and get it shipped this coming week.

    Over time I do anticipate release frequency will slow down. But as part of my goal to build the best Lemmy client for iOS, releases will still occur with regular frequency.

    Thank you to everyone who has downloaded the app so far. And to everyone who has given feedback, I really appreciate it. All of your feedback has been heard, and I'm actively working to implement most of it into the application. Stay tuned!

    Were there major performance improvements between 2.12.0 and 2.18.0?

    I recently had to downgrade from TensorFlow 2.18.0 to 2.12.0 so that I could turn my model into a CoreML model, since coremltools only supports TensorFlow 2.12.0.

    After doing that, training my model is taking roughly 3-4x longer than it did on 2.18.0.

    Dodgers take Game 2 as series shifts to NY

    Dodgers beat the Yankees 4-2 as the series shifts to Yankee Stadium.

    Are the Yankees in desperation mode yet? Judge doesn’t look good at the plate.

    fishcharlie Charlie Fish @eventfrontier.com

    Software Engineer (iOS - ForeFlight) 🖥📱, student pilot ✈️, HUGE Colorado Avalanche fan 🥅, entrepreneur (rrainn, Inc.) ⭐️ https://charlie.fish/
